concurrent.framework.samples.expensive.app module
Sample application that runs a set of expensive tasks on the system to demonstrate the framework's viability
File: expensive.app.py
-
class concurrent.framework.samples.expensive.app.ExpensiveNode(compmgr, init=None, cls=<class 'concurrent.framework.samples.expensive.app.ExpensiveNode'>)[source]
Bases: concurrent.framework.nodes.applicationnode.ApplicationNode
Application node distributing the computation of an expensive task
-
app_init()[source]
Called just before the main entry. Used as the initialization point instead of the constructor
-
app_main()[source]
Application's main entry point
-
get_task_system()[source]
Called from the base class when we are connected to a MasterNode and we are
able to send computation tasks over
-
num_tasks
Number of tasks that must be performed
-
push_tasksystem_failed(result)[source]
We failed to push an ITaskSystem onto the computation framework!
-
push_tasksystem_response(result)[source]
We just added an ITaskSystem to the framework. Check result for more info
-
time_per_task
Time each task will spend doing nothing (active wait) to simulate an expensive computation
-
work_finished(result, task_system)[source]
Called when the work has been done; the result is what our ITaskSystem
sent back to us. Check result for more info
-
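The "active wait" that time_per_task refers to can be sketched as a busy loop that burns CPU rather than sleeping. The active_wait helper below is a hypothetical illustration, not part of the framework:

```python
import time

def active_wait(seconds):
    """Busy-wait (burn CPU) for the given duration to simulate expensive work."""
    deadline = time.perf_counter() + seconds
    iterations = 0
    while time.perf_counter() < deadline:
        iterations += 1  # spin instead of sleeping, to keep the CPU busy
    return iterations

start = time.perf_counter()
active_wait(0.05)
elapsed = time.perf_counter() - start
print(elapsed >= 0.05)  # → True
```

An active wait (as opposed to time.sleep) keeps a worker core occupied, which makes it a better stand-in for a genuinely expensive computation when measuring the framework's scheduling overhead.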
class concurrent.framework.samples.expensive.app.ExpensiveNodeTaskSystem(time_per_task, num_tasks)[source]
Bases: concurrent.core.async.api.ITaskSystem
The task system that is executed on the MasterNode and controls what jobs are required to be performed
-
gather_result(master)[source]
Once the system has stated that it has finished, the MasterNode will request the results that
are to be sent to the originator. Returns the total time spent on the master.
-
generate_tasks(master)[source]
Create task set
-
init_system(master)[source]
Initialize the system
-
is_complete(master)[source]
Ask the system if the computation has finished. If not, we will go on and generate more tasks. This
check is performed every time a task finishes.
-
task_finished(master, task, result, error)[source]
Called once a task has been performed
-
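The lifecycle documented above (init_system, generate_tasks, task_finished, is_complete, gather_result) can be sketched with a self-contained toy driver. ToyTaskSystem and run_master below are hypothetical stand-ins that execute "tasks" locally instead of distributing them; they are not the framework's actual classes:

```python
import time

class ToyTaskSystem:
    """Hypothetical stand-in for ITaskSystem, illustrating the call order."""
    def __init__(self, time_per_task, num_tasks):
        self.time_per_task = time_per_task
        self.num_tasks = num_tasks
        self.finished = 0
        self.start = None

    def init_system(self, master):
        self.start = time.time()

    def generate_tasks(self, master):
        # One payload per requested unit of work.
        return [self.time_per_task] * self.num_tasks

    def task_finished(self, master, task, result, error):
        self.finished += 1

    def is_complete(self, master):
        return self.finished == self.num_tasks

    def gather_result(self, master):
        # Total wall-clock time spent on the master.
        return time.time() - self.start

def run_master(system):
    """Toy master loop: runs every generated task, then gathers the result."""
    system.init_system(master=None)
    for task in system.generate_tasks(master=None):
        result = task  # pretend a worker performed the task
        system.task_finished(None, task, result, None)
    assert system.is_complete(None)
    return system.gather_result(None)

total = run_master(ToyTaskSystem(time_per_task=0.01, num_tasks=4))
```

In the real framework the master distributes tasks to worker nodes and calls task_finished asynchronously as results arrive; the loop above only mirrors the order of the callbacks.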
class concurrent.framework.samples.expensive.app.ExpensiveSimpleNode(compmgr, init=None, cls=<class 'concurrent.framework.samples.expensive.app.ExpensiveSimpleNode'>)[source]
Bases: concurrent.framework.nodes.applicationnode.ApplicationNode
Application node distributing the computation of the Mandelbrot set using just tasks
-
app_init()[source]
Called just before the main entry. Used as the initialization point instead of the constructor
-
app_main()[source]
Application's main entry point
-
check_finished()[source]
Check whether we have finished all computation
-
get_task_system()[source]
Called from the base class when we are connected to a MasterNode and we are
able to send computation tasks over
-
num_tasks
Number of tasks that must be performed
-
push_task_failed(result)[source]
We failed to add a Task to the computation framework
-
push_task_response(result)[source]
We just added a Task to the computation framework
-
push_tasks_failed(result)[source]
We failed to add a set of Tasks to the computation framework
-
push_tasks_response(result)[source]
We just added a set of Tasks to the computation framework
-
send_task_batch
Should we send tasks one by one or batch them into one huge list
-
start_processing()[source]
Called when the app is not using an ITaskSystem and will instead just add tasks,
taking care of the task flow itself
-
task_finished(task, result, error)[source]
Called when a task has been completed
-
time_per_task
Time each task will spend doing nothing (active wait) to simulate an expensive computation
-
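The send_task_batch flag described above amounts to choosing between one push call per task and a single push of the whole list. The ToyClient below is a hypothetical sketch of that difference (its push_task/push_tasks methods only mimic the names documented above, they are not the framework's implementation):

```python
class ToyClient:
    """Hypothetical client counting how many pushes each strategy costs."""
    def __init__(self):
        self.push_calls = 0
        self.queued = []

    def push_task(self, task):
        # One round-trip per task.
        self.push_calls += 1
        self.queued.append(task)

    def push_tasks(self, tasks):
        # One round-trip for the whole batch.
        self.push_calls += 1
        self.queued.extend(tasks)

def submit(client, tasks, send_task_batch):
    if send_task_batch:
        client.push_tasks(tasks)       # one huge list, one call
    else:
        for task in tasks:
            client.push_task(task)     # one call per task

batched, single = ToyClient(), ToyClient()
submit(batched, list(range(100)), send_task_batch=True)
submit(single, list(range(100)), send_task_batch=False)
print(batched.push_calls, single.push_calls)  # → 1 100
```

Both strategies deliver the same tasks; batching simply trades per-task latency feedback (push_task_response/push_task_failed per task) for far fewer round-trips (push_tasks_response/push_tasks_failed once).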
class concurrent.framework.samples.expensive.app.ExpensiveTask(name, system_id, client_id, **kwargs)[source]
Bases: concurrent.core.async.task.Task
-
clean_up()[source]
Called once a task has been performed and its results are about to be sent back. This is used
to optimize network usage and to clean up the task's input data
-
finished(result, error)[source]
Called on the MasterNode, within the main thread, once the task has finished
and the node has recovered the result data.
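The clean_up hook documented for ExpensiveTask exists to shrink the payload that travels back over the network: once the result is computed, the (possibly large) input data can be dropped before the task is serialized. ToyTask below is a hypothetical illustration of that pattern, not the framework's Task class:

```python
class ToyTask:
    """Hypothetical task whose heavy input is dropped before results ship."""
    def __init__(self, name, data):
        self.name = name
        self.data = data      # potentially large input payload
        self.result = None

    def __call__(self):
        # The "expensive" computation; here just a sum over the input.
        self.result = sum(self.data)

    def clean_up(self):
        # The input is no longer needed once the result exists; clearing it
        # keeps the serialized task sent back to the originator small.
        self.data = None

task = ToyTask("expensive", list(range(1000)))
task()
task.clean_up()
print(task.result, task.data)  # → 499500 None
```

The same idea applies to any field that is an input rather than an output: everything still attached to the task after clean_up gets paid for again in serialization and bandwidth on the way back.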