
Results Validation and Discussion

5.2 DISCUSSIONS ON JMETER RESULTS

The Context-Aware Smart Ward System is a centralized system that receives the transformed data from the smart-ward sensors and, based on the defined situations, attempts to discover the appropriate web services. Figure 5.1 is a screenshot of the implemented bedside display screen. It serves as a look-up screen for the doctor and the nurse. For demonstration purposes, a single browser window showing six hospital beds is shown in Figure 4.2.

The situation illustrated here is the monitoring of the pleural tube, which acts as a drain for pleural fluid and excess blood. The screen has a counter which depicts the volume collected in the drain bag.

The drain bag has a weight sensor which transmits the collected volume at specified intervals. The data aggregator engine has a module that converts the raw analogue/digital data received from the sensors into meaningful information. The bedside display is a monitor attached to the smart hospital bed. It logs the course of actions taken by the system during a scenario, as well as the web services which are coordinated in order to achieve the goal situation. As illustrated in the screenshot, the right pane shows that the drain level has gone above the threshold level.
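As an illustrative sketch of such a conversion module (the class name, method names and calibration constant below are assumptions for illustration, not the thesis implementation), the weight-to-volume step might look as follows:

    // Illustrative sketch of a data-aggregator conversion step
    // (names and calibration values are hypothetical).
    public class DrainAggregator {

        // Hypothetical calibration: grams of pleural fluid per millilitre.
        private static final double FLUID_DENSITY_G_PER_ML = 1.01;
        // Critical drain volume named in the text (150 ml).
        private static final double THRESHOLD_ML = 150.0;

        /** Converts a raw weight-sensor reading (grams) into drain volume (ml). */
        public static double toVolumeMl(double rawWeightGrams, double emptyBagGrams) {
            double fluidGrams = Math.max(0, rawWeightGrams - emptyBagGrams);
            return fluidGrams / FLUID_DENSITY_G_PER_ML;
        }

        /** True when the converted volume crosses the critical threshold. */
        public static boolean aboveThreshold(double volumeMl) {
            return volumeMl > THRESHOLD_ML;
        }

        public static void main(String[] args) {
            double volume = toVolumeMl(212.0, 55.0); // raw reading, bag tare weight
            System.out.printf("Drain volume: %.1f ml, alert: %b%n",
                    volume, aboveThreshold(volume));
        }
    }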

Figure 5.1 Bedside display

Each item in the right pane indicates a respective service which is invoked. The left pane reports the results from the services. The middle pane has loaded an X-ray image for the doctor to inspect. The screen also shows the basic information of the patient. Figure 5.2 shows the sequence diagram for the Context-Aware Smart Ward System. Once the volume of a drain tube exceeds its critical value of 150 ml, the system queries the UDDI server to identify a web service whose pre-condition and post-condition match the current situation. On a successful match, the UDDI server returns the location of the web service as a URI together with the WSDL file. The WSDL file specifies the input and output parameters and the format required by the web service; the service returns JSON data wrapped in an XML response.
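The discovery step can be pictured as follows. This Java sketch replaces the real UDDI inquiry with an in-memory registry; the condition tokens follow the text, while the URIs and the record layout are illustrative assumptions:

    import java.util.List;
    import java.util.Optional;
    import java.util.Set;

    // Sketch of pre-/post-condition matching against a service registry.
    // A real deployment would issue a UDDI inquiry and parse the returned
    // WSDL; here the registry is an in-memory list and the URIs are
    // placeholders.
    record ServiceEntry(String name, String uri, Set<String> pre, Set<String> post) {}

    public class Discovery {
        static Optional<ServiceEntry> discover(List<ServiceEntry> registry,
                                               Set<String> situation,
                                               String goal) {
            return registry.stream()
                    .filter(s -> situation.containsAll(s.pre())) // pre-conditions hold now
                    .filter(s -> s.post().contains(goal))        // post-condition reaches goal
                    .findFirst();
        }

        public static void main(String[] args) {
            var registry = List.of(
                new ServiceEntry("chk_weight", "http://host/chk_weight?wsdl",
                        Set.of("drain_high"), Set.of("alert_on")),
                new ServiceEntry("chk_alert", "http://host/chk_alert?wsdl",
                        Set.of("alert_on"), Set.of("indicator_red")));

            discover(registry, Set.of("drain_high"), "alert_on")
                    .ifPresent(s -> System.out.println(
                            "Invoke " + s.name() + " at " + s.uri()));
        }
    }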

The WSDL file is parsed by the central system and the web service is invoked using the URI. The result returned by the service is logged to the bedside display unit as information for the patient-attending personnel. Figure 5.1 illustrates a screenshot of the bedside display unit. In this case, since the drain-increase scenario is a critical situation, the chk_alert service is invoked: the post-condition of the chk_weight service is <drain_high, alert_on> and the pre-condition of the chk_alert service is <alert_on>. The chk_alert service has turned on the red indicator on the display.
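The hand-off from chk_weight to chk_alert generalizes to a simple forward-chaining loop: invoke any service whose pre-conditions hold in the current situation, add its post-conditions to the situation, and repeat until the goal token appears. The following sketch illustrates the idea; it is an illustrative reconstruction, not the ITU engine's actual algorithm:

    import java.util.*;

    // Forward chaining over pre-/post-conditions, in the spirit of the
    // chk_weight -> chk_alert hand-off described above. Service names are
    // from the text; the algorithm itself is an illustrative assumption.
    public class Composer {
        record Svc(String name, Set<String> pre, Set<String> post) {}

        static List<String> compose(List<Svc> services, Set<String> situation,
                                    String goal) {
            List<String> plan = new ArrayList<>();
            Set<String> state = new HashSet<>(situation);
            boolean progressed = true;
            while (!state.contains(goal) && progressed) {
                progressed = false;
                for (Svc s : services) {
                    if (!plan.contains(s.name()) && state.containsAll(s.pre())) {
                        plan.add(s.name());     // "invoke" the service
                        state.addAll(s.post()); // its effects become new context
                        progressed = true;
                    }
                }
            }
            return state.contains(goal) ? plan : List.of(); // empty = no composition
        }

        public static void main(String[] args) {
            var services = List.of(
                new Svc("chk_weight", Set.of("drain_high"), Set.of("alert_on")),
                new Svc("chk_alert",  Set.of("alert_on"),   Set.of("indicator_red")));
            // Prints [chk_weight, chk_alert], mirroring the logged sequence.
            System.out.println(compose(services, Set.of("drain_high"), "indicator_red"));
        }
    }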

In order to contact the doctor, the system checks the doctor's calendar and the location service on his mobile to determine whether the doctor is reachable. The system senses the presence of the doctor and brings up relevant images from the hospital's medical database. Since this context can result in the doctor deciding on either another surgery or a blood transfusion, the system calls the necessary services to check the operation theatre schedule, the availability of other surgery-related personnel, and blood availability, as sketched below. The information is presented on the bedside display. Based on the doctor's decision, the remaining services are called to make arrangements for the operation.
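A minimal sketch of how these independent checks might be fanned out in parallel before being presented together (the stub bodies and return strings are hypothetical placeholders for the real web-service calls):

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Fans out the independent checks (OT schedule, personnel, blood stock)
    // before presenting a combined view at the bedside display.
    public class SurgeryPrep {
        static String checkOT()        { return "check_OT: theatre 2 free at 14:00"; }
        static String checkPersonnel() { return "personnel: anaesthetist on call"; }
        static String checkBlood()     { return "check_blood_avail: B+ 4 units"; }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            List<Callable<String>> checks = List.of(
                    SurgeryPrep::checkOT,
                    SurgeryPrep::checkPersonnel,
                    SurgeryPrep::checkBlood);
            // invokeAll blocks until every check completes.
            for (Future<String> f : pool.invokeAll(checks)) {
                System.out.println(f.get()); // would be logged to the bedside display
            }
            pool.shutdown();
        }
    }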

Figure 5.2 Sequence diagram for the Context-Aware Smart Ward System

The prototype has been validated and tested using Apache JMeter™ 2.9, and the results are as follows. Figure 5.3 illustrates a screenshot from the JMeter testing environment. The prototype was tested with five hundred concurrent users, and the throughput, error percentage, mean, standard deviation and average bytes consumed were all measured. Apache JMeter was used to test performance on both static and dynamic resources (web services, databases and queries). It was used to simulate a heavy load on a server or network in order to test its strength and to analyse overall performance under different load types.

Table 5.1 tabulates the results of a load test conducted on the selected services; 150 samples with 100 threads were used, as illustrated in the sketch below.
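JMeter drives the actual measurement, but to make "150 samples with 100 threads" concrete, the following minimal Java harness (the endpoint URL is hypothetical, and this is not the thesis test code) fires the same pattern of requests and reports the average response time:

    import java.net.URI;
    import java.net.http.*;
    import java.time.Duration;
    import java.util.*;
    import java.util.concurrent.*;

    // Minimal load harness: 150 samples issued from a pool of 100 threads.
    public class LoadSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/itu_engine")) // hypothetical
                    .timeout(Duration.ofSeconds(10))
                    .build();

            int samples = 150;
            ExecutorService pool = Executors.newFixedThreadPool(100); // 100 threads
            List<Future<Long>> futures = new ArrayList<>();
            for (int i = 0; i < samples; i++) {
                futures.add(pool.submit(() -> {
                    long t0 = System.nanoTime();
                    client.send(req, HttpResponse.BodyHandlers.discarding());
                    return (System.nanoTime() - t0) / 1_000_000; // elapsed ms
                }));
            }
            List<Long> times = new ArrayList<>();
            for (Future<Long> f : futures) times.add(f.get());
            pool.shutdown();
            System.out.println("average ms = "
                    + times.stream().mapToLong(Long::longValue).average().orElse(0));
        }
    }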

Figure 5.3 JMeter testing screen

The interpretations of the tabulated columns are as follows; all times are given in milliseconds. The Average column gives the average time taken by the requests to execute. The ITU engine module runs the main service-composition algorithm, so its average time of 514 milliseconds is acceptable; the other services take far less time to execute. The Median column lists the middle value of the response times: when the times taken by the 150 samples of a service are sorted, the middle value is the one listed. The 90% (90th percentile) column indicates that 90% of the samples took no more than the stated time to execute. The Error % column shows the percentage of requests that failed while the tests were carried out. The Bandwidth column lists the traffic generated by each service per second, in kilobytes. Figure 5.4 illustrates the plotted throughput values.
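To make the Median and 90% Line definitions concrete, the following sketch (the sample values are hypothetical) computes both order statistics with the nearest-rank method:

    import java.util.Arrays;

    // How the Median and 90% Line columns are derived: sort the recorded
    // response times and read off order statistics.
    public class Percentiles {
        static long percentile(long[] sortedMs, double p) {
            int idx = (int) Math.ceil(p * sortedMs.length) - 1; // nearest-rank method
            return sortedMs[Math.max(0, idx)];
        }

        public static void main(String[] args) {
            long[] ms = {445, 450, 468, 470, 480, 500, 520, 558, 600, 1467};
            Arrays.sort(ms);
            System.out.println("median = " + percentile(ms, 0.50)); // 480
            System.out.println("p90    = " + percentile(ms, 0.90)); // 600
        }
    }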

The throughput measurements listed in Table 5.2 give the output delivered per unit time; this measures the amount of load applied to the server. Throughput is calculated by dividing the total number of requests by the total time.
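As a worked check against the tabulated figures (assuming the 150 samples per service stated above):

    Throughput = total number of requests / total time

so, for the ITU Engine row of Table 5.2, a throughput of 6.9 requests per second over 150 requests implies a total test time of roughly 150 / 6.9 ≈ 21.7 seconds.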

Figure 5.4 Throughput plot (JMeter); axes: time in milliseconds against the number of requests handled by the server

Table 5.1 Aggregate report tabulated by JMeter (all times in milliseconds)

Thread Group               Average  Median  90% Line  Min  Max   Error %  Rate      Bandwidth
ITU Engine                 514      468     558       445  1467  0        1.504966  13.66612
Bed Side Display           34       33      41        29   79    0        1.523647  7.784884
monitor_vitals             94       89      108       78   154   0        1.523229  3.324626
monitor_drain              16       11      14        9    703   0        1.524979  4.500476
monitor_bpm                15       13      22        11   40    0        1.524855  2.541922
alert_ARST_team            16       14      23        11   33    0        1.524731  2.541715
alert_blood_bank           16       14      23        11   31    0        1.524576  2.541457
alert_doc                  18       16      25        13   53    0        1.524452  5.780711
alert_OT_team              25       15      24        12   636   0        1.524297  5.437752
call_doctor                30       29      37        22   80    0        1.524142  49.18336
check_doctor               35       30      36        27   594   0        1.524111  2.97678
check_doctor_calendar      17       15      24        12   55    0        1.524483  4.975413
check_OT                   17       15      24        12   33    0        1.524623  5.129224
get_blood_group            16       11      13        9    795   0        1.524871  2.978263
check_blood_avail          16       14      23        11   31    0        1.524731  2.541715
check_avail_ambulance      16       14      22        11   35    0        1.524685  2.541637
check_avail_ext_ambulance  16       14      23        11   31    0        1.524623  2.541534
check_blood_bank_ext       16       14      23        11   30    0        1.524561  2.541431


Table 5.2 Summary report produced by JMeter

Thread Group               Throughput  KB/sec  Avg. Bytes
ITU Engine                 6.9         59.31   8816.7
Bed Side Display           4.6         23.52   5209.4
monitor_vitals             3.9         7.02    1865.8
monitor_drain              3.9         10.97   2913
monitor_bpm                3.2         5.49    1738.8
alert_ARST_team            3           5.05    1712.2
alert_blood_bank           2.6         4.42    1715.7
alert_doc                  2.4         8.57    3639
alert_OT_team              2.3         7.16    3243.7
call_doctor                2.1         59.81   28788.5
check_doctor               2           3.76    1954.9
check_doctor_calendar      1.9         5.15    2793.6
check_OT                   1.9         5.73    3157.6
get_blood_group            1.8         3.46    1986
check_blood_avail          1.8         2.96    1719.5
check_avail_ambulance      1.8         2.95    1711.2
check_avail_ext_ambulance  1.7         2.92    1711.3
check_blood_bank_ext       1.8         2.92    1709.4

Figure 5.5 Graph for Minimum, Average and Maximum time

Total time is the difference between the end time of the last sample and the start time of the first sample. The KB/sec column measures throughput in kilobytes per second, and the Avg. Bytes column lists the average number of bytes transferred during the test.
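The two summary columns are mutually consistent:

    KB/sec ≈ Throughput × Avg. Bytes / 1024

For example, for the Bed Side Display row of Table 5.2, 4.6 × 5209.4 / 1024 ≈ 23.4 KB/sec, which matches the tabulated 23.52 up to rounding of the throughput value.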

Figure 5.6 Chart Graph Visualizer (JMeter)

(Figure 5.5 plots the Min, Average and Max series against the number of threads created, with time in milliseconds on the vertical axis; Figure 5.6 plots time in milliseconds against the number of requests handled by the server.)


From the results, it can be inferred that the ITU engine has a throughput of 6.9, i.e., it is capable of handling 6.9 transactions per second. Figure 5.5 illustrates the graph of the minimum, average and maximum times taken by the system. Figure 5.6 plots the Average, Median, Standard Deviation and Throughput values from Table 5.1 and Table 5.2 on a graph.

Figure 5.7 Chart for Spline Visualizer (JMeter)

Figure 5.7 illustrates the test results through a Spline Visualizer. It provides a comprehensive view of the time taken by all the samples; an interpolating function based on polynomial approximation is used to construct a smooth continuous line (Using JMeter to Performance Test Web Services, 2014). According to this chart, the average time taken is only 51 milliseconds.

(Figure 5.7 axes: number of samples against time in milliseconds.)

Figure 5.8 Chart depicting the Latency

Figure 5.8 illustrates the latency incurred during the test run. Latency is the delay incurred while communicating a message (Raja, 2011). Even though the system consists of a single server and a single client, the latency is slightly high because the load test was conducted on a home network with a 64 kbps broadband line; the results would improve significantly if the test were conducted on a high-speed dedicated network. Table 5.3 records the response time, the latency, and the amount of data sent and received in bytes during a given time. The HTTP response code 200 stands for success. All times are measured in milliseconds.
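JMeter reports latency as the time from just before the request is sent until the first byte of the response arrives. A minimal probe illustrating that definition (the URL below is a placeholder, not a thesis endpoint):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Measures latency as time-to-first-byte, matching JMeter's definition.
    public class LatencyProbe {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/monitor_drain"); // hypothetical
            long t0 = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                in.read();                                   // first byte received
                long latencyMs = (System.nanoTime() - t0) / 1_000_000;
                System.out.println("HTTP " + conn.getResponseCode()
                        + ", latency = " + latencyMs + " ms");
                while (in.read() != -1) { /* drain remaining body */ }
            }
        }
    }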

