-
-
Project 5 Video Demo Link: https://drive.google.com/file/d/1S4QXT324b7siZw6OeEe6XEDrIgOODeeu/view?usp=sharing
-
Clone the above repo, run createtable.sql, storedprocedures.sql, and movie-data.sql, then execute the XML parser to parse all of the Stanford movie data and store it in the MySQL database, which must be available on your machine. Then run
mvn package
to build the .war file, and test the website using the format http://<ip>:<port>/cs122b-project1-api-example/login.html
Note: if you want to go through the load balancer and test out the master-slave replication, use the load balancer IP address with port 80 instead of port 8080.
-
Prepared Statement Locations:
-
Connection pooling optimizes the utilization of connection resources to our MySQL server by reusing existing connections, which minimizes the processing time required for establishing entirely new connections. In our codebase, every servlet that uses JDBC to interact with the database relies on the resource configuration specified in the context.xml file to establish connections. This file, located at the paths listed below, contains the details confirming the successful reuse of connections within our application.
-
Connection pooling is implemented with two backend SQL servers in our setup. Each database has its own dedicated connection pool, so the workload is evenly distributed. When requests come in through the load balancer, it directs them to the appropriate server based on the nature of the request, and that server's pool supplies the connection. This distributes the workload across two servers while still benefiting from connection pooling to minimize the creation of new connections, improving performance and resource utilization in our database interactions.
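The sketch below illustrates how a servlet can borrow a pooled connection through the JNDI resource that Tomcat builds from the content.xml file; the resource name jdbc/moviedb and the query are illustrative placeholders rather than the exact names used in our servlets.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledQueryExample {
    public static String findTitle(String movieId) throws Exception {
        // Look up the DataSource that Tomcat creates from the Resource entry
        // in content.xml; getConnection() borrows a connection from the pool
        // instead of opening a brand-new one.
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/moviedb");

        String query = "SELECT title FROM movies WHERE id = ?";
        try (Connection conn = ds.getConnection();
             PreparedStatement statement = conn.prepareStatement(query)) {
            statement.setString(1, movieId);
            try (ResultSet rs = statement.executeQuery()) {
                return rs.next() ? rs.getString("title") : null;
            }
        } // closing the connection returns it to the pool for reuse
    }
}
```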
-
-
/project1/WebContent/META-INF/content.xml
-
/project1/master_content.xml
-
/project1/slave_content.xml
-
From the load balancer, read requests can be sent to either the master or the slave; the receiving server uses its content.xml to connect to its local database and execute the query, reusing a pooled connection when possible. Write requests are handled differently: the master continues to write to its local database through its own connection pool, while the slave connects to the master's database for writes using the same approach. It is worth noting that the connection pools of the two servers are separate, so each server establishes its own connections independently of the queries being sent to the other server.
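As a rough sketch of this read/write routing on a backend server (the JNDI names jdbc/readdb and jdbc/writedb are hypothetical placeholders): on the master both resources would point at the local database, while on the slave the write resource would point at the master's database.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class RoutedConnections {
    private final DataSource readPool;   // local database's pool
    private final DataSource writePool;  // pool pointing at the master

    public RoutedConnections() throws NamingException {
        InitialContext ctx = new InitialContext();
        readPool = (DataSource) ctx.lookup("java:comp/env/jdbc/readdb");
        writePool = (DataSource) ctx.lookup("java:comp/env/jdbc/writedb");
    }

    // Reads stay on the local database's connection pool.
    public Connection getReadConnection() throws SQLException {
        return readPool.getConnection();
    }

    // Writes always go through the pool that points at the master.
    public Connection getWriteConnection() throws SQLException {
        return writePool.getConnection();
    }
}
```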
-
- The log processing script is located at 2023-fall-cs122b-mango/project1/logParser.py
- It reads the log file, which is created when the server receives requests, and calculates the averages for TS and TJ. To run it, use python3 logParser.py in that directory to get the parsed output.
- Note: all log files are located in the folder
/project1/project5-logs/
-
In our project, we employed JMeter to conduct load testing on both our load balancer server and a conventional single website server, using distinct thread groups and varied protocols, including HTTP and HTTPS. We observed that running JMeter at different times yields noticeably different data. This variance is attributed to the inherent dynamism of the AWS instances, which, when subjected to varying loads or stresses, may influence the performance of our servers.
-
The output presented in the aforementioned tables is considered relatively accurate, bearing in mind the dynamic nature of the AWS infrastructure. However, it should be acknowledged that the data's absolute accuracy may be subject to fluctuations depending on the varying load conditions imposed on the AWS instance.
-
Additionally, during the analysis phase we observed a similarity in query times, as well as in the associated TS and TJ values. Contrary to our initial expectation that incorporating a load balancer and connection pooling would yield significant improvements in processing speed, the observed impact was marginal, which may indicate that the enhancements occur on a smaller scale than our measurements capture. This may explain the observed data patterns.