Experiment Execution Tool
Recall that at the end of the CREW experiment definition tool tutorial, we ended up with a tar package containing three files. The first file, ISR.xml, is an XML experiment description of the ISR experiment. The second file, ISR.rb, is an experiment description written in OEDL (OMF Experiment Description Language), and the last file, ISR.ns, is a network simulation script containing all configuration settings to be used inside the Emulab framework. The current implementation of the CREW experiment execution tool only works on top of OMF (cOntrol and Management Framework) testbeds, but it is designed to be versatile and to work with different testbeds that have their own management and control framework. The tool interacts with a testbed through an API designed specifically for that testbed. Thus, supporting a different testbed with a different framework only relies on the availability of an interfacing API and minor changes to the framework itself.
Coming back to where we left off at the end of the experiment definition tool, we start this section by using two of the generated files (i.e. ISR.rb and ISR.ns). Details of the ISR.rb file and the OEDL language are described on a separate page; for a deeper understanding of the language, one can study the OEDL 5.4 reference.
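To give an idea of what such a file contains, below is a minimal, hand-written OEDL 5.4 skeleton. It is a sketch only: the property names, node ID, and timing are illustrative assumptions and do not reproduce the generated ISR.rb.

```ruby
# Minimal OEDL 5.4 skeleton (illustrative; names are assumptions, not ISR.rb)
defProperty('res1', 'node1', 'ID of a resource to use')        # assumed node ID
defProperty('duration', 60, 'Duration of the experiment (s)')  # assumed timing

defGroup('Receiver', property.res1) do |node|
  # applications would be attached to the node here
end

onEvent(:ALL_UP_AND_INSTALLED) do |event|
  info 'Starting the applications'
  allGroups.startApplications
  wait property.duration
  allGroups.stopApplications
  Experiment.done
end
```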
We start by defining the experiment topology in the Emulab framework [1] using the NS file ISR.ns. A tutorial on creating your first experiment in the Emulab framework, together with an explanation of the NS file, follows this page. After defining your topology and swapping in your experiment in the Emulab framework, start the experiment execution tool. The experiment execution tool allows experimenters to load an experiment description, configure parameters, schedule, and finally start an experiment. Figure 1 shows the front view of the experiment execution tool after the OEDL file (i.e. ISR.rb) has been loaded.
Figure 1. The experiment execution tool at a glance.
After loading the file, four different sections are automatically populated, and each section performs a specific task.
- The Parameter Optimization section configures a single- or multi-dimensional optimizer that either maximizes or minimizes an objective performance metric.
- The Performance Visualization section configures the parameters to be visualized during experiment execution.
- The Interference Estimation section configures pre/post experiment interference estimation to detect unwanted wireless interference that could influence the experiment.
- The Experiment Rounds section configures the number of identical runs of an experiment.
Recall the scenario of the ISR experiment, where Node 1 performs a file transfer to Node 2 while at the same time listening to a high-quality (192 kbps) radio channel from Node 3, and Node 2 watches a 10 Mbps movie stream from Node 3 (a hedged OEDL sketch of this traffic is given below).
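As an illustration of how this traffic pattern might look in OEDL, consider the following sketch. The group names, IP address, application URI (test:app:iperf), and its property and measurement-point names are assumptions based on common OMF 5.4 usage, not the contents of the generated ISR.rb; the 192 kbps radio stream is omitted for brevity.

```ruby
# Illustrative OEDL sketch of the ISR traffic (all names are assumptions)
defGroup('Node3', property.node3) do |node|
  node.addApplication("test:app:iperf") do |app|  # assumed iperf app URI
    app.setProperty('client', '192.168.0.2')      # assumed address of Node 2
    app.setProperty('udp', true)                  # the movie stream is UDP
    app.setProperty('bandwidth', 10000000)        # 10 Mbps, as in the scenario
    app.measure('transfer', :interval => 1)       # assumed measurement point
  end
end

defGroup('Node1', property.node1) do |node|
  node.addApplication("test:app:iperf") do |app|  # TCP file transfer to Node 2
    app.setProperty('client', '192.168.0.2')
    app.measure('transfer', :interval => 1)
  end
end
```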
Now, starting from this basic configuration, let us say we want to know how far the video bandwidth from Node 3 to Node 2 can be increased so that we watch a high-definition movie at the highest possible quality. This is an optimization problem, and next we see how to deal with such a problem using the experiment execution tool.
- Select the IVuC optimizer from the Parameter Optimization section and click the text on its left side to reveal its content.
- Select Node3_UDP2_bandwidth, located at the end of the Tune list, as the design space parameter.
- Start from 10,000,000 bps with a step size of 2,000,000 bps and a reduction rate of 0.5. The optimizer stops either once the step size is reduced to 100,000 bps or when the search parameter exceeds 20,000,000 bps (a plain-Ruby sketch of this search loop follows the list).
- In the objective variables subsection, configure two variables that calculate the sending and receiving bandwidth using the AVG aggregation. [Note: click the plus icon in the right-hand corner to add extra variables.]
- Define your objective function as the ratio of the two previously defined variables (i.e. (8*x1)/x2) and select the condition "less than" with a stopping criterion of 0.95 (i.e. (8*x1)/x2 < 0.95, that is, stop when 8*x1 < 0.95*x2). [Note: x1 is multiplied by 8 because iperf reports this value in bytes/sec.]
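To make the optimizer's behaviour concrete, here is a plain-Ruby sketch of a step-halving search of this kind. It is an illustration under stated assumptions, not the tool's IVuC implementation; run_experiment is a hypothetical stand-in for executing one experiment round and returning the measured objective (8*x1)/x2.

```ruby
# Sketch of a step-halving search (illustration only, not the IVuC code).
# run_experiment is a hypothetical callable: it runs one experiment round
# at UDP bandwidth bw (bps) and returns the measured objective (8*x1)/x2.
def optimize(run_experiment)
  bw   = 10_000_000  # starting bandwidth (bps)
  step = 2_000_000   # initial step size (bps)
  dir  = +1          # current search direction

  while step >= 100_000 && bw <= 20_000_000
    objective = run_experiment.call(bw)
    # Increase the bandwidth while the receiver keeps up (objective >= 0.95);
    # on crossing the 0.95 boundary, reverse direction and halve the step.
    if (objective < 0.95) == (dir > 0)
      dir  = -dir
      step = (step * 0.5).to_i  # reduction rate of 0.5
    end
    bw += dir * step
  end
  bw
end
```

Under this schedule the step sizes are 2 Mbps, 1 Mbps, 500 kbps, 250 kbps, and 125 kbps (the next halving falls below the 100,000 bps threshold), which is consistent with the tool reporting an optimum at 125 kbps granularity, such as the 14.125 Mbps found later in this tutorial.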
Figure 2 shows the configuration details of the parameter optimization section.
Figure 2. IVuC optimizer configuration for the ISR experiment.
The next thing we do is configure the performance visualization section, where we define the parameters to be visualized during experiment execution. For this exercise, we visualize the UDP and TCP bandwidth performance at Node2 coming from Node3 and Node1, respectively. As the wireless medium is shared among all nodes, the UDP bandwidth stream from Node3 is affected by the TCP bandwidth stream from Node1, and visualizing these two parameters reveals this fact. Figure 3 shows the configuration details of the performance visualization section.
Figure 3. Performance visualization configuration for the ISR experiment.
After that, we can enable interference estimation and check for possible pre- and post-experiment interference. This is one way of detecting outlier experiments, as an experiment is more likely to have suffered interference if the environment shows interference before and after its execution. For now, we skip this section. Similar to the experiment definition tool, execution configuration settings can be saved to a file: click the save button to write the execution configuration into the OEDL file, so that reloading the file later starts the experiment execution page pre-configured. Finally, set the number of experiment rounds to one and click the start button.
Once the experiment has started, the first thing we see is a log screen populated with debug information. The debug information contains high-level information about the experiment, such as the defined properties, the experiment ID, the events triggered, and a lot more. While the log screen view is an important interaction tool for the experimenter, there is also a second view called the graph screen view. To switch to the graph screen view, click the notebook icon on the experimentation screen. The graph screen view displays the parameters defined in the performance visualization section (see above) as a function of time. For the ISR experiment, the UDP and TCP bandwidths from Node3 and Node1, respectively, are plotted as a function of time. Figure 4 shows the graph screen view of the ISR experiment.
Figure 4. A glance at the graph screen view of the experimentation screen.
From the above figure, we note a couple of things. At the top left there is the graph icon, which brings up the debug information view when clicked. Next to it is the experiment status indicator (i.e. Scheduled, Running, Finished). The label check box turns the labels ON and OFF. Execution of SQL statements is allowed after the experiment has stopped running: switch to the log file viewer, write your SQL statements there, and press the execute icon to run them. Finally, the UDP and TCP bandwidths are plotted as a function of time, and the plot indicates an equal bandwidth share between the two applications. Not shown in the figure, but also found on the experimentation screen, are the parameter settings, objective value, correlation matrix table, and experiment run count.
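As an example of the kind of statement one might run against the collected measurements, the sketch below computes an average transfer size from the experiment's OML database. The database file, table, and column names (iperf_transfer, size, oml_sender_id) are assumptions that depend on the OML schema of the iperf application used; the SQL string itself could equally be typed directly into the tool's SQL view.

```ruby
require 'sqlite3'  # assumes the sqlite3 gem and an SQLite-backed OML database

# Hypothetical schema: adapt the table and column names to the measurement
# points actually reported by your iperf application.
db = SQLite3::Database.new('ISR_experiment.sq3')  # assumed result database
avg = db.get_first_value(
  'SELECT AVG(size) FROM iperf_transfer WHERE oml_sender_id = 3'
)
puts "Average transfer per report from Node3: #{avg} bytes"
```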
Following the whole experimentation process, we see that the ObjFx. value (i.e. (8*x1)/x2) starts around 1 and then drops below 0.95, which triggers the second experimentation cycle. The optimization process carries on and stops when the step size of Node3_UDP2_bandwidth reaches 100,000 bps. The whole process takes around 11 minutes under normal conditions. The IVuC optimizer locates an optimal bandwidth of 14.125 Mbps for the UDP traffic from Node3 to Node2. Therefore, the highest bit rate at which a movie can be streamed is 14.125 Mbps.
Finally, the experimenter has the option to save experiment traces and later perform post-processing on the stored data. CREW also incorporates a post-processing tool to facilitate benchmarking and result analysis. The aim of this tool is to enable the comparison of different experiments through graph plots and performance scores. A performance score can be an objective or a subjective indicator calculated from a number of experiment metrics. The experiments themselves do not need to be conducted on a single testbed or within a specific time bound, as long as the experiment scenarios (aka experiment metadata) fit in the same context. Thus an experiment, once conducted, can be re-executed at a later time on the same testbed or executed on a different testbed, and a comparison of the two solutions can then be performed.
For this tutorial, save the experiment (i.e. press the Save experiment result button), giving it a name (e.g. Solution_one), and start post-processing on the saved data using the benchmarking and result analysis tool. Please be reminded that only selected experiment runs are saved to the file: tick the check box next to each experiment run that you want to save.