Curl the cluster IP of the service (10.96.114.184 here). The request reaches one of the 10 pods in the deployment “nginx-deployment” in a round-robin fashion. When we execute the expose command, a Kubernetes Service of type ClusterIP is created so that all the pods behind it are reachable through a single, cluster-internal IP (10.96.114.184 in this case).
It is possible to get a public IP instead (i.e., an actual external load balancer) by creating a Service of type LoadBalancer. Do feel free to play around with it!
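A minimal sketch of the flow described above, assuming the deployment “nginx-deployment” already exists; the cluster IP shown is just the example value from this walkthrough and will differ on your cluster:

```bash
# Expose the deployment on port 80 as a ClusterIP Service (the default type)
kubectl expose deployment nginx-deployment --port=80 --type=ClusterIP

# Inspect the Service to find the cluster IP assigned to it
kubectl get service nginx-deployment

# Curl the cluster IP from inside the cluster (e.g. from a node or another pod);
# each request lands on one of the pods behind the Service
curl http://10.96.114.184:80
```

To try the public-IP variant, swap `--type=ClusterIP` for `--type=LoadBalancer`; on a local Minikube cluster, `minikube tunnel` is one way to get an external IP assigned to such a Service.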
The above exercise gives pretty good exposure to using Kubernetes to manage large-scale deployments. Trust me, the process is very similar for operating 1000 deployments and containers too! While a Deployment object is good enough for managing stateless applications, Kubernetes provides other resources like Job, DaemonSet, CronJob, StatefulSet, etc. to manage special use cases.
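For example, a recurring batch task is a better fit for a CronJob than for a Deployment. A minimal, illustrative sketch (the name, image and schedule below are arbitrary placeholders):

```bash
# A CronJob that runs a short-lived container every 5 minutes;
# Kubernetes creates a Job (and a pod) for each scheduled run
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo Hello from the CronJob"]
          restartPolicy: OnFailure
EOF
```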
Additional labs:
https://kubernetes.courselabs.co/ (Huge number of free follow-along exercises to play with Kubernetes)
Advanced topics
More often than not, microservices orchestrated with Kubernetes consist of dozens of instances of resources like deployments, services and configs. The manifests for these applications can be auto-generated with Helm templates and packaged as Helm charts. Similar to how we have PyPI for Python packages, there are remote repositories like Bitnami from which Helm charts (e.g. for setting up a production-ready Prometheus or Kafka with a single command) can be downloaded and used. This is a good place to begin.
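A hedged sketch of that workflow is below; the chart and release names are illustrative and may change over time in the Bitnami repository:

```bash
# Add the Bitnami chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a packaged Kafka setup from the chart; "my-kafka" is just a release name
helm install my-kafka bitnami/kafka

# See which Kubernetes resources the chart created on your behalf
kubectl get all -l app.kubernetes.io/instance=my-kafka
```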
Kubernetes provides the flexibility to create our own custom resources (similar to the Deployment or the Pod which we saw). For instance, if you want to create 5 instances of a resource with kind SchoolOfSre, you can! The only thing is that you have to define the custom resource yourself. You can also build a custom operator for your custom resource to take certain actions on its instances. You can check here for more information.
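A minimal sketch of what defining such a custom resource could look like; the API group, version and field names below are made up purely for illustration:

```bash
# Register a CustomResourceDefinition so the API server understands "SchoolOfSre" objects
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: schoolofsres.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: schoolofsres
    singular: schoolofsre
    kind: SchoolOfSre
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              modules:
                type: integer
EOF

# Instances of the new kind can now be created like any other resource
kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: SchoolOfSre
metadata:
  name: level102
spec:
  modules: 5
EOF

kubectl get schoolofsres
```

A custom operator would then watch for SchoolOfSre objects and reconcile the cluster state to match what each instance declares.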
diff --git a/search/search_index.json b/search/search_index.json
index 16a04ac8..c25f3758 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"School of SRE Site Reliability Engineers (SREs) sits at the intersection of software engineering and systems engineering. While there are potentially infinite permutations and combinations of how infrastructure and software components can be put together to achieve an objective, focusing on foundational skills allows SREs to work with complex systems and software, regardless of whether these systems are proprietary, 3rd party, open systems, run on cloud/on-prem infrastructure, etc. Particularly important is to gain a deep understanding of how these areas of systems and infrastructure relate to each other and interact with each other. The combination of software and systems engineering skills is rare and is generally built over time with exposure to a wide variety of infrastructure, systems, and software. SREs bring in engineering practices to keep the site up. Each distributed system is an agglomeration of many components. SREs validate business requirements, convert them to SLAs for each of the components that constitute the distributed system, monitor and measure adherence to SLAs, re-architect or scale out to mitigate or avoid SLA breaches, add these learnings as feedback to new systems or projects and thereby reduce operational toil. Hence SREs play a vital role right from the day 0 design of the system. In early 2019, we started visiting campuses across India to recruit the best and brightest minds to make sure LinkedIn, and all the services that make up its complex technology stack are always available for everyone. This critical function at LinkedIn falls under the purview of the Site Engineering team and Site Reliability Engineers (SREs) who are Software Engineers, specialized in reliability. As we continued on this journey we started getting a lot of questions from these campuses on what exactly the site reliability engineering role entails? And, how could someone learn the skills and the disciplines involved to become a successful site reliability engineer? Fast forward a few months, and a few of these campus students had joined LinkedIn either as interns or as full-time engineers to become a part of the Site Engineering team; we also had a few lateral hires who joined our organization who were not from a traditional SRE background. That's when a few of us got together and started to think about how we can onboard new graduate engineers to the Site Engineering team. There are very few resources out there guiding someone on the basic skill sets one has to acquire as a beginner SRE. Because of the lack of these resources, we felt that individuals have a tough time getting into open positions in the industry. We created the School Of SRE as a starting point for anyone wanting to build their career as an SRE. In this course, we are focusing on building strong foundational skills. The course is structured in a way to provide more real life examples and how learning each of these topics can play an important role in day to day job responsibilities of an SRE. 
Currently we are covering the following topics under the School Of SRE: Level 101 Fundamentals Series Linux Basics Git Linux Networking Python and Web Data Relational databases(MySQL) NoSQL concepts Big Data Systems Design Metrics and Monitoring Security Level 102 Linux Intermediate Linux Advanced Containers and orchestration System Calls and Signals Networking System Design System troubleshooting and performance improvements Continuous Integration and Continuous Delivery We believe continuous learning will help in acquiring deeper knowledge and competencies in order to expand your skill sets, every module has added references that could be a guide for further learning. Our hope is that by going through these modules we should be able to build the essential skills required for a Site Reliability Engineer. At LinkedIn, we are using this curriculum for onboarding our non-traditional hires and new college grads into the SRE role. We had multiple rounds of successful onboarding experiences with new employees and the course helped them be productive in a very short period of time. This motivated us to open source the content for helping other organizations in onboarding new engineers into the role and provide guidance for aspiring individuals to get into the role. We realize that the initial content we created is just a starting point and we hope that the community can help in the journey of refining and expanding the content. Check out the contributing guide to get started.","title":"Home"},{"location":"#school-of-sre","text":"Site Reliability Engineers (SREs) sits at the intersection of software engineering and systems engineering. While there are potentially infinite permutations and combinations of how infrastructure and software components can be put together to achieve an objective, focusing on foundational skills allows SREs to work with complex systems and software, regardless of whether these systems are proprietary, 3rd party, open systems, run on cloud/on-prem infrastructure, etc. Particularly important is to gain a deep understanding of how these areas of systems and infrastructure relate to each other and interact with each other. The combination of software and systems engineering skills is rare and is generally built over time with exposure to a wide variety of infrastructure, systems, and software. SREs bring in engineering practices to keep the site up. Each distributed system is an agglomeration of many components. SREs validate business requirements, convert them to SLAs for each of the components that constitute the distributed system, monitor and measure adherence to SLAs, re-architect or scale out to mitigate or avoid SLA breaches, add these learnings as feedback to new systems or projects and thereby reduce operational toil. Hence SREs play a vital role right from the day 0 design of the system. In early 2019, we started visiting campuses across India to recruit the best and brightest minds to make sure LinkedIn, and all the services that make up its complex technology stack are always available for everyone. This critical function at LinkedIn falls under the purview of the Site Engineering team and Site Reliability Engineers (SREs) who are Software Engineers, specialized in reliability. As we continued on this journey we started getting a lot of questions from these campuses on what exactly the site reliability engineering role entails? And, how could someone learn the skills and the disciplines involved to become a successful site reliability engineer? 
Fast forward a few months, and a few of these campus students had joined LinkedIn either as interns or as full-time engineers to become a part of the Site Engineering team; we also had a few lateral hires who joined our organization who were not from a traditional SRE background. That's when a few of us got together and started to think about how we can onboard new graduate engineers to the Site Engineering team. There are very few resources out there guiding someone on the basic skill sets one has to acquire as a beginner SRE. Because of the lack of these resources, we felt that individuals have a tough time getting into open positions in the industry. We created the School Of SRE as a starting point for anyone wanting to build their career as an SRE. In this course, we are focusing on building strong foundational skills. The course is structured in a way to provide more real life examples and how learning each of these topics can play an important role in day to day job responsibilities of an SRE. Currently we are covering the following topics under the School Of SRE: Level 101 Fundamentals Series Linux Basics Git Linux Networking Python and Web Data Relational databases(MySQL) NoSQL concepts Big Data Systems Design Metrics and Monitoring Security Level 102 Linux Intermediate Linux Advanced Containers and orchestration System Calls and Signals Networking System Design System troubleshooting and performance improvements Continuous Integration and Continuous Delivery We believe continuous learning will help in acquiring deeper knowledge and competencies in order to expand your skill sets, every module has added references that could be a guide for further learning. Our hope is that by going through these modules we should be able to build the essential skills required for a Site Reliability Engineer. At LinkedIn, we are using this curriculum for onboarding our non-traditional hires and new college grads into the SRE role. We had multiple rounds of successful onboarding experiences with new employees and the course helped them be productive in a very short period of time. This motivated us to open source the content for helping other organizations in onboarding new engineers into the role and provide guidance for aspiring individuals to get into the role. We realize that the initial content we created is just a starting point and we hope that the community can help in the journey of refining and expanding the content. Check out the contributing guide to get started.","title":"School of SRE"},{"location":"CODE_OF_CONDUCT/","text":"This code of conduct outlines expectations for participation in LinkedIn-managed open source communities, as well as steps for reporting unacceptable behavior. We are committed to providing a welcoming and inspiring community for all. People violating this code of conduct may be banned from the community. Our open source communities strive to: Be friendly and patient: Remember you might not be communicating in someone else's primary spoken or programming language, and others may not have your level of understanding. Be welcoming: Our communities welcome and support people of all backgrounds and identities. This includes, but is not limited to members of any race, ethnicity, culture, national origin, color, immigration status, social and economic class, educational level, sex, sexual orientation, gender identity and expression, age, size, family status, political belief, religion, and mental and physical ability. 
Be respectful: We are a world-wide community of professionals, and we conduct ourselves professionally. Disagreement is no excuse for poor behavior and poor manners. Disrespectful and unacceptable behavior includes, but is not limited to: Violent threats or language. Discriminatory or derogatory jokes and language. Posting sexually explicit or violent material. Posting, or threatening to post, people's personally identifying information (\"doxing\"). Insults, especially those using discriminatory terms or slurs. Behavior that could be perceived as sexual attention. Advocating for or encouraging any of the above behaviors. Understand disagreements: Disagreements, both social and technical, are useful learning opportunities. Seek to understand the other viewpoints and resolve differences constructively. This code is not exhaustive or complete. It serves to capture our common understanding of a productive, collaborative environment. We expect the code to be followed in spirit as much as in the letter. Scope This code of conduct applies to all repos and communities for LinkedIn-managed open source projects regardless of whether or not the repo explicitly calls out its use of this code. The code also applies in public spaces when an individual is representing a project or its community. Examples include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Note: Some LinkedIn-managed communities have codes of conduct that pre-date this document and issue resolution process. While communities are not required to change their code, they are expected to use the resolution process outlined here. The review team will coordinate with the communities involved to address your concerns. Reporting Code of Conduct Issues We encourage all communities to resolve issues on their own whenever possible. This builds a broader and deeper understanding and ultimately a healthier interaction. In the event that an issue cannot be resolved locally, please feel free to report your concerns by contacting oss@linkedin.com . In your report please include: Your contact information. Names (real, usernames or pseudonyms) of any individuals involved. If there are additional witnesses, please include them as well. Your account of what occurred, and if you believe the incident is ongoing. If there is a publicly available record (e.g. a mailing list archive or a public chat log), please include a link or attachment. Any additional information that may be helpful. All reports will be reviewed by a multi-person team and will result in a response that is deemed necessary and appropriate to the circumstances. Where additional perspectives are needed, the team may seek insight from others with relevant expertise or experience. The confidentiality of the person reporting the incident will be kept at all times. Involved parties are never part of the review team. Anyone asked to stop unacceptable behavior is expected to comply immediately. If an individual engages in unacceptable behavior, the review team may take any action they deem appropriate, including a permanent ban from the community. 
This code of conduct is based on the Microsoft Open Source Code of Conduct which was based on the template established by the TODO Group and used by numerous other large communities (e.g., Facebook , Yahoo , Twitter , GitHub ) and the Scope section from the Contributor Covenant version 1.4 .","title":"Code of Conduct"},{"location":"CODE_OF_CONDUCT/#scope","text":"This code of conduct applies to all repos and communities for LinkedIn-managed open source projects regardless of whether or not the repo explicitly calls out its use of this code. The code also applies in public spaces when an individual is representing a project or its community. Examples include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Note: Some LinkedIn-managed communities have codes of conduct that pre-date this document and issue resolution process. While communities are not required to change their code, they are expected to use the resolution process outlined here. The review team will coordinate with the communities involved to address your concerns.","title":"Scope"},{"location":"CODE_OF_CONDUCT/#reporting-code-of-conduct-issues","text":"We encourage all communities to resolve issues on their own whenever possible. This builds a broader and deeper understanding and ultimately a healthier interaction. In the event that an issue cannot be resolved locally, please feel free to report your concerns by contacting oss@linkedin.com . In your report please include: Your contact information. Names (real, usernames or pseudonyms) of any individuals involved. If there are additional witnesses, please include them as well. Your account of what occurred, and if you believe the incident is ongoing. If there is a publicly available record (e.g. a mailing list archive or a public chat log), please include a link or attachment. Any additional information that may be helpful. All reports will be reviewed by a multi-person team and will result in a response that is deemed necessary and appropriate to the circumstances. Where additional perspectives are needed, the team may seek insight from others with relevant expertise or experience. The confidentiality of the person reporting the incident will be kept at all times. Involved parties are never part of the review team. Anyone asked to stop unacceptable behavior is expected to comply immediately. If an individual engages in unacceptable behavior, the review team may take any action they deem appropriate, including a permanent ban from the community. This code of conduct is based on the Microsoft Open Source Code of Conduct which was based on the template established by the TODO Group and used by numerous other large communities (e.g., Facebook , Yahoo , Twitter , GitHub ) and the Scope section from the Contributor Covenant version 1.4 .","title":"Reporting Code of Conduct Issues"},{"location":"CONTRIBUTING/","text":"We realise that the initial content we created is just a starting point and our hope is that the community can help in the journey refining and extending the contents. As a contributor, you represent that the content you submit is not plagiarised. By submitting the content, you (and, if applicable, your employer) are licensing the submitted content to LinkedIn and the open source community subject to the Creative Commons Attribution 4.0 International Public License. 
Repository URL : https://github.com/linkedin/school-of-sre Contributing Guidelines Ensure that you adhere to the following guidelines: Should be about principles and concepts that can be applied in any company or individual project. Do not focus on particular tools or tech stack(which usually change over time). Adhere to the Code of Conduct . Should be relevant to the roles and responsibilities of an SRE. Should be locally tested (see steps for testing) and well formatted. It is good practice to open an issue first and discuss your changes before submitting a pull request. This way, you can incorporate ideas from others before you even start. Building and testing locally Run the following commands to build and view the site locally before opening a PR. python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt mkdocs build mkdocs serve Opening a PR Follow the GitHub PR workflow for your contributions. Fork this repo, create a feature branch, commit your changes and open a PR to this repo.","title":"Contribute"},{"location":"CONTRIBUTING/#contributing-guidelines","text":"Ensure that you adhere to the following guidelines: Should be about principles and concepts that can be applied in any company or individual project. Do not focus on particular tools or tech stack(which usually change over time). Adhere to the Code of Conduct . Should be relevant to the roles and responsibilities of an SRE. Should be locally tested (see steps for testing) and well formatted. It is good practice to open an issue first and discuss your changes before submitting a pull request. This way, you can incorporate ideas from others before you even start.","title":"Contributing Guidelines"},{"location":"CONTRIBUTING/#building-and-testing-locally","text":"Run the following commands to build and view the site locally before opening a PR. python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt mkdocs build mkdocs serve","title":"Building and testing locally"},{"location":"CONTRIBUTING/#opening-a-pr","text":"Follow the GitHub PR workflow for your contributions. Fork this repo, create a feature branch, commit your changes and open a PR to this repo.","title":"Opening a PR"},{"location":"sre_community/","text":"We are having an active LinkedIn community for School of SRE. Please join the group via : https://www.linkedin.com/groups/12493545/ The group has members with different levels of experience in site reliability engineering. There are active conversation on different technical topics centered around site reliability engineering. We encourage everyone to join the conversation and learn from each other and build a successful career in the SRE space.","title":"SRE Community"},{"location":"level101/big_data/evolution/","text":"Evolution of Hadoop Architecture of Hadoop HDFS The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS is part of the Apache Hadoop Core project . The main components of HDFS include: 1. NameNode: is the arbitrator and central repository of file namespace in the cluster. 
The NameNode executes the operations such as opening, closing, and renaming files and directories. 2. DataNode: manages the storage attached to the node on which it runs. It is responsible for serving all the read and writes requests. It performs operations on instructions on NameNode such as creation, deletion, and replications of blocks. 3. Client: Responsible for getting the required metadata from the namenode and then communicating with the datanodes for reads and writes. YARN YARN stands for \u201cYet Another Resource Negotiator\u201c. It was introduced in Hadoop 2.0 to remove the bottleneck on Job Tracker which was present in Hadoop 1.0. YARN was described as a \u201cRedesigned Resource Manager\u201d at the time of its launching, but it has now evolved to be known as a large-scale distributed operating system used for Big Data processing. The main components of YARN architecture include: 1. Client: It submits map-reduce(MR) jobs to the resource manager. 2. Resource Manager: It is the master daemon of YARN and is responsible for resource assignment and management among all the applications. Whenever it receives a processing request, it forwards it to the corresponding node manager and allocates resources for the completion of the request accordingly. It has two major components: 1. Scheduler: It performs scheduling based on the allocated application and available resources. It is a pure scheduler, which means that it does not perform other tasks such as monitoring or tracking and does not guarantee a restart if a task fails. The YARN scheduler supports plugins such as Capacity Scheduler and Fair Scheduler to partition the cluster resources. 2. Application manager: It is responsible for accepting the application and negotiating the first container from the resource manager. It also restarts the Application Manager container if a task fails. 3. Node Manager: It takes care of individual nodes on the Hadoop cluster and manages application and workflow and that particular node. Its primary job is to keep up with the Node Manager. It monitors resource usage, performs log management, and also kills a container based on directions from the resource manager. It is also responsible for creating the container process and starting it at the request of the Application master. 4. Application Master: An application is a single job submitted to a framework. The application manager is responsible for negotiating resources with the resource manager, tracking the status, and monitoring the progress of a single application. The application master requests the container from the node manager by sending a Container Launch Context(CLC) which includes everything an application needs to run. Once the application is started, it sends the health report to the resource manager from time-to-time. 5. Container: It is a collection of physical resources such as RAM, CPU cores, and disk on a single node. The containers are invoked by Container Launch Context(CLC) which is a record that contains information such as environment variables, security tokens, dependencies, etc. MapReduce framework The term MapReduce represents two separate and distinct tasks Hadoop programs perform-Map Job and Reduce Job. Map jobs take data sets as input and process them to produce key-value pairs. Reduce job takes the output of the Map job i.e. the key-value pairs and aggregates them to produce desired results. Hadoop MapReduce (Hadoop Map/Reduce) is a software framework for distributed processing of large data sets on computing clusters. 
Mapreduce helps to split the input data set into a number of parts and run a program on all data parts parallel at once. Please find the below Word count example demonstrating the usage of the MapReduce framework: Other tooling around Hadoop Hive Uses a language called HQL which is very SQL like. Gives non-programmers the ability to query and analyze data in Hadoop. Is basically an abstraction layer on top of map-reduce. Ex. HQL query: SELECT pet.name, comment FROM pet JOIN event ON (pet.name = event.name); In mysql: SELECT pet.name, comment FROM pet, event WHERE pet.name = event.name; Pig Uses a scripting language called Pig Latin, which is more workflow driven. Don't need to be an expert Java programmer but need a few coding skills. Is also an abstraction layer on top of map-reduce. Here is a quick question for you: What is the output of running the pig queries in the right column against the data present in the left column in the below image? Output: 7,Komal,Nayak,24,9848022334,trivendram 8,Bharathi,Nambiayar,24,9848022333,Chennai 5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar 6,Archana,Mishra,23,9848022335,Chennai Spark Spark provides primitives for in-memory cluster computing that allows user programs to load data into a cluster\u2019s memory and query it repeatedly, making it well suited to machine learning algorithms. Presto Presto is a high performance, distributed SQL query engine for Big Data. Its architecture allows users to query a variety of data sources such as Hadoop, AWS S3, Alluxio, MySQL, Cassandra, Kafka, and MongoDB. Example presto query: use studentDB; show tables; SELECT roll_no, name FROM studentDB.studentDetails where section=\u2019A\u2019 limit 5; Data Serialisation and storage In order to transport the data over the network or to store on some persistent storage, we use the process of translating data structures or objects state into binary or textual form. We call this process serialization.. Avro data is stored in a container file (a .avro file) and its schema (the .avsc file) is stored with the data file. Apache Hive provides support to store a table as Avro and can also query data in this serialisation format.","title":"Evolution and Architecture of Hadoop"},{"location":"level101/big_data/evolution/#evolution-of-hadoop","text":"","title":"Evolution of Hadoop"},{"location":"level101/big_data/evolution/#architecture-of-hadoop","text":"HDFS The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS is part of the Apache Hadoop Core project . The main components of HDFS include: 1. NameNode: is the arbitrator and central repository of file namespace in the cluster. The NameNode executes the operations such as opening, closing, and renaming files and directories. 2. DataNode: manages the storage attached to the node on which it runs. It is responsible for serving all the read and writes requests. It performs operations on instructions on NameNode such as creation, deletion, and replications of blocks. 3. Client: Responsible for getting the required metadata from the namenode and then communicating with the datanodes for reads and writes. 
YARN YARN stands for \u201cYet Another Resource Negotiator\u201c. It was introduced in Hadoop 2.0 to remove the bottleneck on Job Tracker which was present in Hadoop 1.0. YARN was described as a \u201cRedesigned Resource Manager\u201d at the time of its launching, but it has now evolved to be known as a large-scale distributed operating system used for Big Data processing. The main components of YARN architecture include: 1. Client: It submits map-reduce(MR) jobs to the resource manager. 2. Resource Manager: It is the master daemon of YARN and is responsible for resource assignment and management among all the applications. Whenever it receives a processing request, it forwards it to the corresponding node manager and allocates resources for the completion of the request accordingly. It has two major components: 1. Scheduler: It performs scheduling based on the allocated application and available resources. It is a pure scheduler, which means that it does not perform other tasks such as monitoring or tracking and does not guarantee a restart if a task fails. The YARN scheduler supports plugins such as Capacity Scheduler and Fair Scheduler to partition the cluster resources. 2. Application manager: It is responsible for accepting the application and negotiating the first container from the resource manager. It also restarts the Application Manager container if a task fails. 3. Node Manager: It takes care of individual nodes on the Hadoop cluster and manages application and workflow and that particular node. Its primary job is to keep up with the Node Manager. It monitors resource usage, performs log management, and also kills a container based on directions from the resource manager. It is also responsible for creating the container process and starting it at the request of the Application master. 4. Application Master: An application is a single job submitted to a framework. The application manager is responsible for negotiating resources with the resource manager, tracking the status, and monitoring the progress of a single application. The application master requests the container from the node manager by sending a Container Launch Context(CLC) which includes everything an application needs to run. Once the application is started, it sends the health report to the resource manager from time-to-time. 5. Container: It is a collection of physical resources such as RAM, CPU cores, and disk on a single node. The containers are invoked by Container Launch Context(CLC) which is a record that contains information such as environment variables, security tokens, dependencies, etc.","title":"Architecture of Hadoop"},{"location":"level101/big_data/evolution/#mapreduce-framework","text":"The term MapReduce represents two separate and distinct tasks Hadoop programs perform-Map Job and Reduce Job. Map jobs take data sets as input and process them to produce key-value pairs. Reduce job takes the output of the Map job i.e. the key-value pairs and aggregates them to produce desired results. Hadoop MapReduce (Hadoop Map/Reduce) is a software framework for distributed processing of large data sets on computing clusters. Mapreduce helps to split the input data set into a number of parts and run a program on all data parts parallel at once. Please find the below Word count example demonstrating the usage of the MapReduce framework:","title":"MapReduce framework"},{"location":"level101/big_data/evolution/#other-tooling-around-hadoop","text":"Hive Uses a language called HQL which is very SQL like. 
Gives non-programmers the ability to query and analyze data in Hadoop. Is basically an abstraction layer on top of map-reduce. Ex. HQL query: SELECT pet.name, comment FROM pet JOIN event ON (pet.name = event.name); In mysql: SELECT pet.name, comment FROM pet, event WHERE pet.name = event.name; Pig Uses a scripting language called Pig Latin, which is more workflow driven. Don't need to be an expert Java programmer but need a few coding skills. Is also an abstraction layer on top of map-reduce. Here is a quick question for you: What is the output of running the pig queries in the right column against the data present in the left column in the below image? Output: 7,Komal,Nayak,24,9848022334,trivendram 8,Bharathi,Nambiayar,24,9848022333,Chennai 5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar 6,Archana,Mishra,23,9848022335,Chennai Spark Spark provides primitives for in-memory cluster computing that allows user programs to load data into a cluster\u2019s memory and query it repeatedly, making it well suited to machine learning algorithms. Presto Presto is a high performance, distributed SQL query engine for Big Data. Its architecture allows users to query a variety of data sources such as Hadoop, AWS S3, Alluxio, MySQL, Cassandra, Kafka, and MongoDB. Example presto query: use studentDB; show tables; SELECT roll_no, name FROM studentDB.studentDetails where section=\u2019A\u2019 limit 5;","title":"Other tooling around Hadoop"},{"location":"level101/big_data/evolution/#data-serialisation-and-storage","text":"In order to transport the data over the network or to store on some persistent storage, we use the process of translating data structures or objects state into binary or textual form. We call this process serialization.. Avro data is stored in a container file (a .avro file) and its schema (the .avsc file) is stored with the data file. Apache Hive provides support to store a table as Avro and can also query data in this serialisation format.","title":"Data Serialisation and storage"},{"location":"level101/big_data/intro/","text":"Big Data Prerequisites Basics of Linux File systems. Basic understanding of System Design. What to expect from this course This course covers the basics of Big Data and how it has evolved to become what it is today. We will take a look at a few realistic scenarios where Big Data would be a perfect fit. An interesting assignment on designing a Big Data system is followed by understanding the architecture of Hadoop and the tooling around it. What is not covered under this course Writing programs to draw analytics from data. Course Contents Overview of Big Data Usage of Big Data techniques Evolution of Hadoop Architecture of hadoop HDFS Yarn MapReduce framework Other tooling around hadoop Hive Pig Spark Presto Data Serialisation and storage Overview of Big Data Big Data is a collection of large datasets that cannot be processed using traditional computing techniques. It is not a single technique or a tool, rather it has become a complete subject, which involves various tools, techniques, and frameworks. Big Data could consist of Structured data Unstructured data Semi-structured data Characteristics of Big Data: Volume Variety Velocity Variability Examples of Big Data generation include stock exchanges, social media sites, jet engines, etc. Usage of Big Data Techniques Take the example of the traffic lights problem. There are more than 300,000 traffic lights in the US as of 2018. 
Let us assume that we placed a device on each of them to collect metrics and send it to a central metrics collection system. If each of the IoT devices sends 10 events per minute, we have 300000x10x60x24 = 432x10^7 events per day. How would you go about processing that and telling me how many of the signals were \u201cgreen\u201d at 10:45 am on a particular day? Consider the next example on Unified Payments Interface (UPI) transactions: We had about 1.15 billion UPI transactions in the month of October 2019 in India. If we try to extrapolate this data to about a year and try to find out some common payments that were happening through a particular UPI ID, how do you suggest we go about that?","title":"Introduction"},{"location":"level101/big_data/intro/#big-data","text":"","title":"Big Data"},{"location":"level101/big_data/intro/#prerequisites","text":"Basics of Linux File systems. Basic understanding of System Design.","title":"Prerequisites"},{"location":"level101/big_data/intro/#what-to-expect-from-this-course","text":"This course covers the basics of Big Data and how it has evolved to become what it is today. We will take a look at a few realistic scenarios where Big Data would be a perfect fit. An interesting assignment on designing a Big Data system is followed by understanding the architecture of Hadoop and the tooling around it.","title":"What to expect from this course"},{"location":"level101/big_data/intro/#what-is-not-covered-under-this-course","text":"Writing programs to draw analytics from data.","title":"What is not covered under this course"},{"location":"level101/big_data/intro/#course-contents","text":"Overview of Big Data Usage of Big Data techniques Evolution of Hadoop Architecture of hadoop HDFS Yarn MapReduce framework Other tooling around hadoop Hive Pig Spark Presto Data Serialisation and storage","title":"Course Contents"},{"location":"level101/big_data/intro/#overview-of-big-data","text":"Big Data is a collection of large datasets that cannot be processed using traditional computing techniques. It is not a single technique or a tool, rather it has become a complete subject, which involves various tools, techniques, and frameworks. Big Data could consist of Structured data Unstructured data Semi-structured data Characteristics of Big Data: Volume Variety Velocity Variability Examples of Big Data generation include stock exchanges, social media sites, jet engines, etc.","title":"Overview of Big Data"},{"location":"level101/big_data/intro/#usage-of-big-data-techniques","text":"Take the example of the traffic lights problem. There are more than 300,000 traffic lights in the US as of 2018. Let us assume that we placed a device on each of them to collect metrics and send it to a central metrics collection system. If each of the IoT devices sends 10 events per minute, we have 300000x10x60x24 = 432x10^7 events per day. How would you go about processing that and telling me how many of the signals were \u201cgreen\u201d at 10:45 am on a particular day? Consider the next example on Unified Payments Interface (UPI) transactions: We had about 1.15 billion UPI transactions in the month of October 2019 in India. If we try to extrapolate this data to about a year and try to find out some common payments that were happening through a particular UPI ID, how do you suggest we go about that?","title":"Usage of Big Data Techniques"},{"location":"level101/big_data/tasks/","text":"Tasks and conclusion Post-training tasks: Try setting up your own 3 node Hadoop cluster. 
A VM based solution can be found here Write a simple spark/MR job of your choice and understand how to generate analytics from data. Sample dataset can be found here References: Hadoop documentation HDFS Architecture YARN Architecture Google GFS paper","title":"Conclusion"},{"location":"level101/big_data/tasks/#tasks-and-conclusion","text":"","title":"Tasks and conclusion"},{"location":"level101/big_data/tasks/#post-training-tasks","text":"Try setting up your own 3 node Hadoop cluster. A VM based solution can be found here Write a simple spark/MR job of your choice and understand how to generate analytics from data. Sample dataset can be found here","title":"Post-training tasks:"},{"location":"level101/big_data/tasks/#references","text":"Hadoop documentation HDFS Architecture YARN Architecture Google GFS paper","title":"References:"},{"location":"level101/databases_nosql/further_reading/","text":"Conclusion We have covered basic concepts of NoSQL databases. There is much more to learn and do. We hope this course gives you a good start and inspires you to explore further. Further reading NoSQL: https://hostingdata.co.uk/nosql-database/ https://www.mongodb.com/nosql-explained https://www.mongodb.com/nosql-explained/nosql-vs-sql Cap Theorem http://www.julianbrowne.com/article/brewers-cap-theorem Scalability http://www.slideshare.net/jboner/scalability-availability-stability-patterns Eventual Consistency https://www.allthingsdistributed.com/2008/12/eventually_consistent.html https://www.toptal.com/big-data/consistent-hashing https://web.stanford.edu/class/cs244/papers/chord_TON_2003.pdf","title":"Conclusion"},{"location":"level101/databases_nosql/further_reading/#conclusion","text":"We have covered basic concepts of NoSQL databases. There is much more to learn and do. We hope this course gives you a good start and inspires you to explore further.","title":"Conclusion"},{"location":"level101/databases_nosql/further_reading/#further-reading","text":"NoSQL: https://hostingdata.co.uk/nosql-database/ https://www.mongodb.com/nosql-explained https://www.mongodb.com/nosql-explained/nosql-vs-sql Cap Theorem http://www.julianbrowne.com/article/brewers-cap-theorem Scalability http://www.slideshare.net/jboner/scalability-availability-stability-patterns Eventual Consistency https://www.allthingsdistributed.com/2008/12/eventually_consistent.html https://www.toptal.com/big-data/consistent-hashing https://web.stanford.edu/class/cs244/papers/chord_TON_2003.pdf","title":"Further reading"},{"location":"level101/databases_nosql/intro/","text":"NoSQL Concepts Prerequisites Relational Databases What to expect from this course At the end of training, you will have an understanding of what a NoSQL database is, what kind of advantages or disadvantages it has over traditional RDBMS, learn about different types of NoSQL databases and understand some of the underlying concepts & trade offs w.r.t to NoSQL. What is not covered under this course We will not be deep diving into any specific NoSQL Database. Course Contents Introduction to NoSQL CAP Theorem Data versioning Partitioning Hashing Quorum Introduction When people use the term \u201cNoSQL database\u201d, they typically use it to refer to any non-relational database. Some say the term \u201cNoSQL\u201d stands for \u201cnon SQL\u201d while others say it stands for \u201cnot only SQL.\u201d Either way, most agree that NoSQL databases are databases that store data in a format other than relational tables. 
A common misconception is that NoSQL databases or non-relational databases don\u2019t store relationship data well. NoSQL databases can store relationship data\u2014they just store it differently than relational databases do. In fact, when compared with SQL databases, many find modeling relationship data in NoSQL databases to be easier , because related data doesn\u2019t have to be split between tables. Such databases have existed since the late 1960s, but the name \"NoSQL\" was only coined in the early 21st century. NASA used a NoSQL database to track inventory for the Apollo mission. NoSQL databases emerged in the late 2000s as the cost of storage dramatically decreased. Gone were the days of needing to create a complex, difficult-to-manage data model simply for the purposes of reducing data duplication. Developers (rather than storage) were becoming the primary cost of software development, so NoSQL databases optimized for developer productivity. With the rise of Agile development methodology, NoSQL databases were developed with a focus on scaling, fast performance and at the same time allowed for frequent application changes and made programming easier. Types of NoSQL databases: Over time due to the way these NoSQL databases were developed to suit requirements at different companies, we ended up with quite a few types of them. However, they can be broadly classified into 4 types. Some of the databases can overlap between different types. They are Document databases: They store data in documents similar to JSON (JavaScript Object Notation) objects. Each document contains pairs of fields and values. The values can typically be a variety of types including things like strings, numbers, booleans, arrays, or objects, and their structures typically align with objects developers are working with in code. The advantages include intuitive data model & flexible schemas. Because of their variety of field value types and powerful query languages, document databases are great for a wide variety of use cases and can be used as a general purpose database. They can horizontally scale-out to accomodate large data volumes. Ex: MongoDB, Couchbase Key-Value databases: These are a simpler type of databases where each item contains keys and values. A value can typically only be retrieved by referencing its key, so learning how to query for a specific key-value pair is typically simple. Key-value databases are great for use cases where you need to store large amounts of data but you don\u2019t need to perform complex queries to retrieve it. Common use cases include storing user preferences or caching. Ex: Redis , DynamoDB , Voldemort / Venice (Linkedin), Wide-Column stores: They store data in tables, rows, and dynamic columns. Wide-column stores provide a lot of flexibility over relational databases because each row is not required to have the same columns. Many consider wide-column stores to be two-dimensional key-value databases. Wide-column stores are great for when you need to store large amounts of data and you can predict what your query patterns will be. Wide-column stores are commonly used for storing Internet of Things data and user profile data. Cassandra and HBase are two of the most popular wide-column stores. Graph Databases: These databases store data in nodes and edges. Nodes typically store information about people, places, and things while edges store information about the relationships between the nodes. The underlying storage mechanism of graph databases can vary. 
Some depend on a relational engine and \u201cstore\u201d the graph data in a table (although a table is a logical element, therefore this approach imposes another level of abstraction between the graph database, the graph database management system and the physical devices where the data is actually stored). Others use a key-value store or document-oriented database for storage, making them inherently NoSQL structures. Graph databases excel in use cases where you need to traverse relationships to look for patterns such as social networks, fraud detection, and recommendation engines. Ex: Neo4j Comparison Performance Scalability Flexibility Complexity Functionality Key Value high high high none Variable Document stores high Variable (high) high low Variable (low) Column DB high high moderate low minimal Graph Variable Variable high high Graph theory Differences between SQL and NoSQL The table below summarizes the main differences between SQL and NoSQL databases. SQL Databases NoSQL Databases Data Storage Model Tables with fixed rows and columns Document: JSON documents, Key-value: key-value pairs, Wide-column: tables with rows and dynamic columns, Graph: nodes and edges Primary Purpose General purpose Document: general purpose, Key-value: large amounts of data with simple lookup queries, Wide-column: large amounts of data with predictable query patterns, Graph: analyzing and traversing relationships between connected data Schemas Rigid Flexible Scaling Vertical (scale-up with a larger server) Horizontal (scale-out across commodity servers) Multi-Record ACID Transactions Supported Most do not support multi-record ACID transactions. However, some\u2014like MongoDB\u2014do. Joins Typically required Typically not required Data to Object Mapping Requires ORM (object-relational mapping) Many do not require ORMs. Document DB documents map directly to data structures in most popular programming languages. Advantages Flexible Data Models Most NoSQL systems feature flexible schemas. A flexible schema means you can easily modify your database schema to add or remove fields to support for evolving application requirements. This facilitates with continuous application development of new features without database operation overhead. Horizontal Scaling Most NoSQL systems allow you to scale horizontally, which means you can add in cheaper & commodity hardware, whenever you want to scale a system. On the other hand SQL systems generally scale Vertically (a more powerful server). NoSQL systems can also host huge data sets when compared to traditional SQL systems. Fast Queries NoSQL can generally be a lot faster than traditional SQL systems due to data denormalization and horizontal scaling. Most NoSQL systems also tend to store similar data together facilitating faster query responses. Developer productivity NoSQL systems tend to map data based on the programming data structures. 
As a result developers need to perform fewer data transformations leading to increased productivity & fewer bugs.","title":"Introduction"},{"location":"level101/databases_nosql/intro/#nosql-concepts","text":"","title":"NoSQL Concepts"},{"location":"level101/databases_nosql/intro/#prerequisites","text":"Relational Databases","title":"Prerequisites"},{"location":"level101/databases_nosql/intro/#what-to-expect-from-this-course","text":"At the end of training, you will have an understanding of what a NoSQL database is, what kind of advantages or disadvantages it has over traditional RDBMS, learn about different types of NoSQL databases and understand some of the underlying concepts & trade offs w.r.t to NoSQL.","title":"What to expect from this course"},{"location":"level101/databases_nosql/intro/#what-is-not-covered-under-this-course","text":"We will not be deep diving into any specific NoSQL Database.","title":"What is not covered under this course"},{"location":"level101/databases_nosql/intro/#course-contents","text":"Introduction to NoSQL CAP Theorem Data versioning Partitioning Hashing Quorum","title":"Course Contents"},{"location":"level101/databases_nosql/intro/#introduction","text":"When people use the term \u201cNoSQL database\u201d, they typically use it to refer to any non-relational database. Some say the term \u201cNoSQL\u201d stands for \u201cnon SQL\u201d while others say it stands for \u201cnot only SQL.\u201d Either way, most agree that NoSQL databases are databases that store data in a format other than relational tables. A common misconception is that NoSQL databases or non-relational databases don\u2019t store relationship data well. NoSQL databases can store relationship data\u2014they just store it differently than relational databases do. In fact, when compared with SQL databases, many find modeling relationship data in NoSQL databases to be easier , because related data doesn\u2019t have to be split between tables. Such databases have existed since the late 1960s, but the name \"NoSQL\" was only coined in the early 21st century. NASA used a NoSQL database to track inventory for the Apollo mission. NoSQL databases emerged in the late 2000s as the cost of storage dramatically decreased. Gone were the days of needing to create a complex, difficult-to-manage data model simply for the purposes of reducing data duplication. Developers (rather than storage) were becoming the primary cost of software development, so NoSQL databases optimized for developer productivity. With the rise of Agile development methodology, NoSQL databases were developed with a focus on scaling, fast performance and at the same time allowed for frequent application changes and made programming easier.","title":"Introduction"},{"location":"level101/databases_nosql/intro/#types-of-nosql-databases","text":"Over time due to the way these NoSQL databases were developed to suit requirements at different companies, we ended up with quite a few types of them. However, they can be broadly classified into 4 types. Some of the databases can overlap between different types. They are Document databases: They store data in documents similar to JSON (JavaScript Object Notation) objects. Each document contains pairs of fields and values. The values can typically be a variety of types including things like strings, numbers, booleans, arrays, or objects, and their structures typically align with objects developers are working with in code. The advantages include intuitive data model & flexible schemas. 
Because of their variety of field value types and powerful query languages, document databases are great for a wide variety of use cases and can be used as a general purpose database. They can horizontally scale-out to accomodate large data volumes. Ex: MongoDB, Couchbase Key-Value databases: These are a simpler type of databases where each item contains keys and values. A value can typically only be retrieved by referencing its key, so learning how to query for a specific key-value pair is typically simple. Key-value databases are great for use cases where you need to store large amounts of data but you don\u2019t need to perform complex queries to retrieve it. Common use cases include storing user preferences or caching. Ex: Redis , DynamoDB , Voldemort / Venice (Linkedin), Wide-Column stores: They store data in tables, rows, and dynamic columns. Wide-column stores provide a lot of flexibility over relational databases because each row is not required to have the same columns. Many consider wide-column stores to be two-dimensional key-value databases. Wide-column stores are great for when you need to store large amounts of data and you can predict what your query patterns will be. Wide-column stores are commonly used for storing Internet of Things data and user profile data. Cassandra and HBase are two of the most popular wide-column stores. Graph Databases: These databases store data in nodes and edges. Nodes typically store information about people, places, and things while edges store information about the relationships between the nodes. The underlying storage mechanism of graph databases can vary. Some depend on a relational engine and \u201cstore\u201d the graph data in a table (although a table is a logical element, therefore this approach imposes another level of abstraction between the graph database, the graph database management system and the physical devices where the data is actually stored). Others use a key-value store or document-oriented database for storage, making them inherently NoSQL structures. Graph databases excel in use cases where you need to traverse relationships to look for patterns such as social networks, fraud detection, and recommendation engines. Ex: Neo4j","title":"Types of NoSQL databases:"},{"location":"level101/databases_nosql/intro/#comparison","text":"Performance Scalability Flexibility Complexity Functionality Key Value high high high none Variable Document stores high Variable (high) high low Variable (low) Column DB high high moderate low minimal Graph Variable Variable high high Graph theory","title":"Comparison"},{"location":"level101/databases_nosql/intro/#differences-between-sql-and-nosql","text":"The table below summarizes the main differences between SQL and NoSQL databases. SQL Databases NoSQL Databases Data Storage Model Tables with fixed rows and columns Document: JSON documents, Key-value: key-value pairs, Wide-column: tables with rows and dynamic columns, Graph: nodes and edges Primary Purpose General purpose Document: general purpose, Key-value: large amounts of data with simple lookup queries, Wide-column: large amounts of data with predictable query patterns, Graph: analyzing and traversing relationships between connected data Schemas Rigid Flexible Scaling Vertical (scale-up with a larger server) Horizontal (scale-out across commodity servers) Multi-Record ACID Transactions Supported Most do not support multi-record ACID transactions. However, some\u2014like MongoDB\u2014do. 
Joins Typically required Typically not required Data to Object Mapping Requires ORM (object-relational mapping) Many do not require ORMs. Document DB documents map directly to data structures in most popular programming languages.","title":"Differences between SQL and NoSQL"},{"location":"level101/databases_nosql/intro/#advantages","text":"Flexible Data Models Most NoSQL systems feature flexible schemas. A flexible schema means you can easily modify your database schema to add or remove fields to support for evolving application requirements. This facilitates with continuous application development of new features without database operation overhead. Horizontal Scaling Most NoSQL systems allow you to scale horizontally, which means you can add in cheaper & commodity hardware, whenever you want to scale a system. On the other hand SQL systems generally scale Vertically (a more powerful server). NoSQL systems can also host huge data sets when compared to traditional SQL systems. Fast Queries NoSQL can generally be a lot faster than traditional SQL systems due to data denormalization and horizontal scaling. Most NoSQL systems also tend to store similar data together facilitating faster query responses. Developer productivity NoSQL systems tend to map data based on the programming data structures. As a result developers need to perform fewer data transformations leading to increased productivity & fewer bugs.","title":"Advantages"},{"location":"level101/databases_nosql/key_concepts/","text":"Key Concepts Lets looks at some of the key concepts when we talk about NoSQL or distributed systems CAP Theorem In a keynote titled \u201c Towards Robust Distributed Systems \u201d at ACM\u2019s PODC symposium in 2000 Eric Brewer came up with the so-called CAP-theorem which is widely adopted today by large web companies as well as in the NoSQL community. The CAP acronym stands for C onsistency, A vailability & P artition Tolerance. Consistency It refers to how consistent a system is after an execution. A distributed system is called consistent when a write made by a source is available for all readers of that shared data. Different NoSQL systems support different levels of consistency. Availability It refers to how a system responds to loss of functionality of different systems due to hardware and software failures. A high availability implies that a system is still available to handle operations (reads and writes) when a certain part of the system is down due to a failure or upgrade. Partition Tolerance It is the ability of the system to continue operations in the event of a network partition. A network partition occurs when a failure causes two or more islands of networks where the systems can\u2019t talk to each other across the islands temporarily or permanently. Brewer alleges that one can at most choose two of these three characteristics in a shared-data system. The CAP-theorem states that a choice can only be made for two options out of consistency, availability and partition tolerance. A growing number of use cases in large scale applications tend to value reliability implying that availability & redundancy are more valuable than consistency. As a result these systems struggle to meet ACID properties. They attain this by loosening on the consistency requirement i.e Eventual Consistency. Eventual Consistency means that all readers will see writes, as time goes on: \u201cIn a steady state, the system will eventually return the last written value\u201d. 
Clients therefore may face an inconsistent state of data as updates are in progress. For instance, in a replicated database updates may go to one node which replicates the latest version to all other nodes that contain a replica of the modified dataset so that the replica nodes eventually will have the latest version. NoSQL systems support different levels of eventual consistency models. For example: Read Your Own Writes Consistency Clients will see their updates immediately after they are written. The reads can hit nodes other than the one where it was written. However, they might not see updates by other clients immediately. Session Consistency Clients will see the updates to their data within a session scope. This generally indicates that reads & writes occur on the same server. Other clients using the same nodes will receive the same updates. Causal Consistency A system provides causal consistency if the following condition holds: write operations that are related by potential causality are seen by each process of the system in order. Different processes may observe concurrent writes in different orders. Eventual consistency is useful if concurrent updates of the same partitions of data are unlikely and if clients do not immediately depend on reading updates issued by themselves or by other clients. The consistency model chosen for the system (or parts of it) determines where requests are routed, e.g. to which replicas. CAP alternatives illustration Choice Traits Examples Consistency + Availability (Forfeit Partitions) 2-phase commits Cache invalidation protocols Single-site databases Cluster databases LDAP xFS file system Consistency + Partition tolerance (Forfeit Availability) Pessimistic locking Make minority partitions unavailable Distributed databases Distributed locking Majority protocols Availability + Partition tolerance (Forfeit Consistency) expirations/leases conflict resolution optimistic DNS Web caching Versioning of Data in distributed systems When data is distributed across nodes, it can be modified on different nodes at the same time (assuming strict consistency is enforced). Questions arise on conflict resolution for concurrent updates. Some of the popular conflict resolution mechanisms are Timestamps This is the most obvious solution. You sort updates based on chronological order and choose the latest update. However, this relies on clock synchronization across different parts of the infrastructure. This gets even more complicated when parts of systems are spread across different geographic locations. Optimistic Locking You associate a unique value like a clock or counter with every data update. When a client wants to update data, it has to specify which version of data needs to be updated. This would mean you need to keep track of the history of data versions. Vector Clocks A vector clock is defined as a tuple of clock values from each node. In a distributed environment, each node maintains a tuple of such clock values which represent the state of the node itself and its peers/replicas. A clock value may be a real timestamp derived from the local clock, or a version number.
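To make the vector clock comparison concrete, here is a minimal Python sketch, assuming a simple dict-of-counters representation (the function names and layout are illustrative only, not taken from any particular database):

```python
# Minimal vector clock sketch (illustrative; not tied to any specific database).
def increment(clock, node):
    """Bump the local counter for `node` before it performs a write."""
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def merge(a, b):
    """Combine two clocks by taking the element-wise maximum."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}

def happened_before(a, b):
    """True if every counter in `a` is <= the one in `b` (a causally precedes b)."""
    return all(a.get(n, 0) <= b.get(n, 0) for n in set(a) | set(b)) and a != b

def concurrent(a, b):
    """Neither clock dominates the other -> concurrent, conflicting writes."""
    return a != b and not happened_before(a, b) and not happened_before(b, a)

# Two replicas update the same key independently:
c1 = increment({}, "node1")   # {'node1': 1}
c2 = increment({}, "node2")   # {'node2': 1}
print(concurrent(c1, c2))     # True -> the application must resolve the conflict
print(merge(c1, c2))          # {'node1': 1, 'node2': 1}
```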
Vector clocks illustration Vector clocks have the following advantages over other conflict resolution mechanisms: No dependency on synchronized clocks No total ordering of revision numbers required for causal reasoning No need to store and maintain multiple versions of the data on different nodes. Partitioning When the amount of data crosses the capacity of a single node, we need to think of splitting data, creating replicas for load balancing & disaster recovery. Depending on how dynamic the infrastructure is, we have a few approaches that we can take. Memory cached These are partitioned in-memory databases that are primarily used for transient data. These databases are generally used as a front for traditional RDBMS. Most frequently used data is replicated from an RDBMS into a memory database to facilitate fast queries and to take the load off the backend DBs. A very common example is Memcached or Couchbase. Clustering Traditional cluster mechanisms abstract away the cluster topology from clients. A client need not know where the actual data is residing and which node it is talking to. Clustering is very commonly used in traditional RDBMS where it can help scale the persistence layer to a certain extent. Separating reads from writes In this method, you will have multiple replicas hosting the same data. The incoming writes are typically sent to a single node (Leader) or multiple nodes (multi-Leader), while the rest of the replicas (Followers) handle read requests. The leader replicates writes asynchronously to all followers. However, the write lag can\u2019t be completely avoided. Sometimes a leader can crash before it replicates all the data to a follower. When this happens, a follower with the most consistent data can be turned into a leader. As you can realize now, it is hard to enforce full consistency in this model. You also need to consider the ratio of read vs write traffic. This model won\u2019t make sense when writes are higher than reads. The replication methods can also vary widely. Some systems do a complete transfer of state periodically, while others use a delta state transfer approach. You could also transfer the state by transferring the operations in order. The followers can then apply the same operations as the leader to catch up. Sharding Sharding refers to dividing data in such a way that data is distributed evenly (both in terms of storage & processing power) across a cluster of nodes. It can also imply data locality, which means similar & related data is stored together to facilitate faster access. A shard in turn can be further replicated to meet load balancing or disaster recovery requirements. A single shard replica might take in all writes (single leader) or multiple replicas can take writes (multi-leader). Reads can be distributed across multiple replicas. Since data is now distributed across multiple nodes, clients should be able to consistently figure out where data is hosted. We will look at some of the common techniques below. The downside of sharding is that joins between shards are not possible. So an upstream/downstream application has to aggregate the results from multiple shards. Sharding example Hashing A hash function is a function that maps one piece of data\u2014typically describing some kind of object, often of arbitrary size\u2014to another piece of data, typically an integer, known as a hash code, or simply a hash. In a partitioned database, it is important to consistently map a key to a server/replica.
For ex: you can use a very simple hash as a modulo function. _p = k mod n_ Where p -> partition, k -> primary key n -> no of nodes The downside of this simple hash is that, whenever the cluster topology changes, the data distribution also changes. When you are dealing with memory caches, it will be easy to distribute partitions around. Whenever a node joins/leaves a topology, partitions can reorder themselves, a cache miss can be re-populated from backend DB. However when you look at persistent data, it is not possible as the new node doesn\u2019t have the data needed to serve it. This brings us to consistent hashing. Consistent Hashing Consistent hashing is a distributed hashing scheme that operates independently of the number of servers or objects in a distributed hash table by assigning them a position on an abstract circle, or hash ring . This allows servers and objects to scale without affecting the overall system. Say that our hash function h() generates a 32-bit integer. Then, to determine to which server we will send a key k, we find the server s whose hash h(s) is the smallest integer that is larger than h(k). To make the process simpler, we assume the table is circular, which means that if we cannot find a server with a hash larger than h(k), we wrap around and start looking from the beginning of the array. Consistent hashing illustration In consistent hashing when a server is removed or added then only the keys from that server are relocated. For example, if server S3 is removed then, all keys from server S3 will be moved to server S4 but keys stored on server S4 and S2 are not relocated. But there is one problem, when server S3 is removed then keys from S3 are not equally distributed among remaining servers S4 and S2. They are only assigned to server S4 which increases the load on server S4. To evenly distribute the load among servers when a server is added or removed, it creates a fixed number of replicas ( known as virtual nodes) of each server and distributes it along the circle. So instead of server labels S1, S2 and S3, we will have S10 S11\u2026S19, S20 S21\u2026S29 and S30 S31\u2026S39. The factor for a number of replicas is also known as weight , depending on the situation. All keys which are mapped to replicas Sij are stored on server Si. To find a key we do the same thing, find the position of the key on the circle and then move forward until you find a server replica. If the server replica is Sij then the key is stored in server Si. Suppose server S3 is removed, then all S3 replicas with labels S30 S31 \u2026 S39 must be removed. Now the objects keys adjacent to S3X labels will be automatically re-assigned to S1X, S2X and S4X. All keys originally assigned to S1, S2 & S4 will not be moved. Similar things happen if we add a server. Suppose we want to add a server S5 as a replacement of S3 then we need to add labels S50 S51 \u2026 S59. In the ideal case, one-fourth of keys from S1, S2 and S4 will be reassigned to S5. When applied to persistent storages, further issues arise: if a node has left the scene, data stored on this node becomes unavailable, unless it has been replicated to other nodes before; in the opposite case of a new node joining the others, adjacent nodes are no longer responsible for some pieces of data which they still store but not get asked for anymore as the corresponding objects are no longer hashed to them by requesting clients. In order to address this issue, a replication factor (r) can be introduced. 
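Below is a small Python sketch of the hash ring described above, including virtual nodes. The hash function, the number of virtual nodes and the server names are assumptions made purely for illustration; real systems differ in these details.

```python
import bisect
import hashlib

def h(key):
    # Stable hash -> position on a 32-bit ring (md5 chosen only for the example).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHashRing:
    def __init__(self, servers, vnodes=10):
        self.vnodes = vnodes
        self.ring = {}               # position on the circle -> physical server
        self.sorted_positions = []
        for s in servers:
            self.add(s)

    def add(self, server):
        for i in range(self.vnodes):                  # e.g. S3 -> S30, S31, ... S39
            pos = h(f"{server}{i}")
            self.ring[pos] = server
            bisect.insort(self.sorted_positions, pos)

    def remove(self, server):
        for i in range(self.vnodes):
            pos = h(f"{server}{i}")
            del self.ring[pos]
            self.sorted_positions.remove(pos)

    def get(self, key):
        # First replica position clockwise from h(key); wrap around if needed.
        idx = bisect.bisect(self.sorted_positions, h(key)) % len(self.sorted_positions)
        return self.ring[self.sorted_positions[idx]]

ring = ConsistentHashRing(["S1", "S2", "S3", "S4"])
print(ring.get("user:42"))   # some server
ring.remove("S3")            # only keys that hashed near S3's virtual nodes move
print(ring.get("user:42"))   # usually unchanged
```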
Introducing replicas in a partitioning scheme\u2014besides reliability benefits\u2014also makes it possible to spread workload for read requests that can go to any physical node responsible for a requested piece of data. Scalability doesn\u2019t work if the clients have to decide between multiple versions of the dataset, because they need to read from a quorum of servers which in turn reduces the efficiency of load balancing. Quorum Quorum is the minimum number of nodes in a cluster that must be online and be able to communicate with each other. If any additional node failure occurs beyond this threshold, the cluster will stop running. To attain a quorum, you need a majority of the nodes. Commonly it is (N/2 + 1), where N is the total no of nodes in the system. For ex, In a 3 node cluster, you need 2 nodes for a majority, In a 5 node cluster, you need 3 nodes for a majority, In a 6 node cluster, you need 4 nodes for a majority. Quorum example Network problems can cause communication failures among cluster nodes. One set of nodes might be able to communicate together across a functioning part of a network but not be able to communicate with a different set of nodes in another part of the network. This is known as split brain in cluster or cluster partitioning. Now the partition which has quorum is allowed to continue running the application. The other partitions are removed from the cluster. Eg: In a 5 node cluster, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5, being a minority, stop running as a cluster. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run. Below diagram demonstrates Quorum selection on a cluster partitioned into two sets. Cluster Quorum example","title":"Key Concepts"},{"location":"level101/databases_nosql/key_concepts/#key-concepts","text":"Lets looks at some of the key concepts when we talk about NoSQL or distributed systems","title":"Key Concepts"},{"location":"level101/databases_nosql/key_concepts/#cap-theorem","text":"In a keynote titled \u201c Towards Robust Distributed Systems \u201d at ACM\u2019s PODC symposium in 2000 Eric Brewer came up with the so-called CAP-theorem which is widely adopted today by large web companies as well as in the NoSQL community. The CAP acronym stands for C onsistency, A vailability & P artition Tolerance. Consistency It refers to how consistent a system is after an execution. A distributed system is called consistent when a write made by a source is available for all readers of that shared data. Different NoSQL systems support different levels of consistency. Availability It refers to how a system responds to loss of functionality of different systems due to hardware and software failures. A high availability implies that a system is still available to handle operations (reads and writes) when a certain part of the system is down due to a failure or upgrade. Partition Tolerance It is the ability of the system to continue operations in the event of a network partition. A network partition occurs when a failure causes two or more islands of networks where the systems can\u2019t talk to each other across the islands temporarily or permanently. 
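As a quick illustration of the majority-quorum arithmetic described above (N/2 + 1), here is a tiny, purely illustrative Python sketch that works out which side of a partitioned cluster still holds quorum:

```python
def quorum_size(total_nodes):
    # Majority quorum: strictly more than half of the nodes.
    return total_nodes // 2 + 1

def surviving_partition(total_nodes, partitions):
    """Return the first partition that still holds a majority, or None (cluster halts)."""
    needed = quorum_size(total_nodes)
    for part in partitions:
        if len(part) >= needed:
            return part
    return None

# 5 node cluster split into {1,2,3} and {4,5}: the 3-node side keeps running.
print(quorum_size(5))                                   # 3
print(surviving_partition(5, [{1, 2, 3}, {4, 5}]))      # {1, 2, 3}
print(surviving_partition(5, [{1, 2}, {3}, {4, 5}]))    # None -> no side has quorum
```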
Brewer alleges that one can at most choose two of these three characteristics in a shared-data system. The CAP-theorem states that a choice can only be made for two options out of consistency, availability and partition tolerance. A growing number of use cases in large scale applications tend to value reliability implying that availability & redundancy are more valuable than consistency. As a result these systems struggle to meet ACID properties. They attain this by loosening on the consistency requirement i.e Eventual Consistency. Eventual Consistency means that all readers will see writes, as time goes on: \u201cIn a steady state, the system will eventually return the last written value\u201d. Clients therefore may face an inconsistent state of data as updates are in progress. For instance, in a replicated database updates may go to one node which replicates the latest version to all other nodes that contain a replica of the modified dataset so that the replica nodes eventually will have the latest version. NoSQL systems support different levels of eventual consistency models. For example: Read Your Own Writes Consistency Clients will see their updates immediately after they are written. The reads can hit nodes other than the one where it was written. However they might not see updates by other clients immediately. Session Consistency Clients will see the updates to their data within a session scope. This generally indicates that reads & writes occur on the same server. Other clients using the same nodes will receive the same updates. Casual Consistency A system provides causal consistency if the following condition holds: write operations that are related by potential causality are seen by each process of the system in order. Different processes may observe concurrent writes in different orders Eventual consistency is useful if concurrent updates of the same partitions of data are unlikely and if clients do not immediately depend on reading updates issued by themselves or by other clients. Depending on what consistency model was chosen for the system (or parts of it), determines where the requests are routed, ex: replicas. CAP alternatives illustration Choice Traits Examples Consistency + Availability (Forfeit Partitions) 2-phase commits Cache invalidation protocols Single-site databases Cluster databases LDAP xFS file system Consistency + Partition tolerance (Forfeit Availability) Pessimistic locking Make minority partitions unavailable Distributed databases Distributed locking Majority protocols Availability + Partition tolerance (Forfeit Consistency) expirations/leases conflict resolution optimistic DNS Web caching","title":"CAP Theorem"},{"location":"level101/databases_nosql/key_concepts/#versioning-of-data-in-distributed-systems","text":"When data is distributed across nodes, it can be modified on different nodes at the same time (assuming strict consistency is enforced). Questions arise on conflict resolution for concurrent updates. Some of the popular conflict resolution mechanism are Timestamps This is the most obvious solution. You sort updates based on chronological order and choose the latest update. However this relies on clock synchronization across different parts of the infrastructure. This gets even more complicated when parts of systems are spread across different geographic locations. Optimistic Locking You associate a unique value like a clock or counter with every data update. 
When a client wants to update data, it has to specify which version of data needs to be updated. This would mean you need to keep track of history of the data versions. Vector Clocks A vector clock is defined as a tuple of clock values from each node. In a distributed environment, each node maintains a tuple of such clock values which represent the state of the nodes itself and its peers/replicas. A clock value may be real timestamps derived from local clock or version no. Vector clocks illustration Vector clocks have the following advantages over other conflict resolution mechanism No dependency on synchronized clocks No total ordering of revision nos required for casual reasoning No need to store and maintain multiple versions of the data on different nodes.** **","title":"Versioning of Data in distributed systems"},{"location":"level101/databases_nosql/key_concepts/#partitioning","text":"When the amount of data crosses the capacity of a single node, we need to think of splitting data, creating replicas for load balancing & disaster recovery. Depending on how dynamic the infrastructure is, we have a few approaches that we can take. Memory cached These are partitioned in-memory databases that are primarily used for transient data. These databases are generally used as a front for traditional RDBMS. Most frequently used data is replicated from a rdbms into a memory database to facilitate fast queries and to take the load off from backend DB\u2019s. A very common example is memcached or couchbase. Clustering Traditional cluster mechanisms abstract away the cluster topology from clients. A client need not know where the actual data is residing and which node it is talking to. Clustering is very commonly used in traditional RDBMS where it can help scaling the persistent layer to a certain extent. Separating reads from writes In this method, you will have multiple replicas hosting the same data. The incoming writes are typically sent to a single node (Leader) or multiple nodes (multi-Leader), while the rest of the replicas (Follower) handle reads requests. The leader replicates writes asynchronously to all followers. However the write lag can\u2019t be completely avoided. Sometimes a leader can crash before it replicates all the data to a follower. When this happens, a follower with the most consistent data can be turned into a leader. As you can realize now, it is hard to enforce full consistency in this model. You also need to consider the ratio of read vs write traffic. This model won\u2019t make sense when writes are higher than reads. The replication methods can also vary widely. Some systems do a complete transfer of state periodically, while others use a delta state transfer approach. You could also transfer the state by transferring the operations in order. The followers can then apply the same operations as the leader to catch up. Sharding Sharing refers to dividing data in such a way that data is distributed evenly (both in terms of storage & processing power) across a cluster of nodes. It can also imply data locality, which means similar & related data is stored together to facilitate faster access. A shard in turn can be further replicated to meet load balancing or disaster recovery requirements. A single shard replica might take in all writes (single leader) or multiple replicas can take writes (multi-leader). Reads can be distributed across multiple replicas. 
Since data is now distributed across multiple nodes, clients should be able to consistently figure out where data is hosted. We will look at some of the common techniques below. The downside of sharding is that joins between shards is not possible. So an upstream/downstream application has to aggregate the results from multiple shards. Sharding example","title":"Partitioning"},{"location":"level101/databases_nosql/key_concepts/#hashing","text":"A hash function is a function that maps one piece of data\u2014typically describing some kind of object, often of arbitrary size\u2014to another piece of data, typically an integer, known as hash code , or simply hash . In a partitioned database, it is important to consistently map a key to a server/replica. For ex: you can use a very simple hash as a modulo function. _p = k mod n_ Where p -> partition, k -> primary key n -> no of nodes The downside of this simple hash is that, whenever the cluster topology changes, the data distribution also changes. When you are dealing with memory caches, it will be easy to distribute partitions around. Whenever a node joins/leaves a topology, partitions can reorder themselves, a cache miss can be re-populated from backend DB. However when you look at persistent data, it is not possible as the new node doesn\u2019t have the data needed to serve it. This brings us to consistent hashing.","title":"Hashing"},{"location":"level101/databases_nosql/key_concepts/#consistent-hashing","text":"Consistent hashing is a distributed hashing scheme that operates independently of the number of servers or objects in a distributed hash table by assigning them a position on an abstract circle, or hash ring . This allows servers and objects to scale without affecting the overall system. Say that our hash function h() generates a 32-bit integer. Then, to determine to which server we will send a key k, we find the server s whose hash h(s) is the smallest integer that is larger than h(k). To make the process simpler, we assume the table is circular, which means that if we cannot find a server with a hash larger than h(k), we wrap around and start looking from the beginning of the array. Consistent hashing illustration In consistent hashing when a server is removed or added then only the keys from that server are relocated. For example, if server S3 is removed then, all keys from server S3 will be moved to server S4 but keys stored on server S4 and S2 are not relocated. But there is one problem, when server S3 is removed then keys from S3 are not equally distributed among remaining servers S4 and S2. They are only assigned to server S4 which increases the load on server S4. To evenly distribute the load among servers when a server is added or removed, it creates a fixed number of replicas ( known as virtual nodes) of each server and distributes it along the circle. So instead of server labels S1, S2 and S3, we will have S10 S11\u2026S19, S20 S21\u2026S29 and S30 S31\u2026S39. The factor for a number of replicas is also known as weight , depending on the situation. All keys which are mapped to replicas Sij are stored on server Si. To find a key we do the same thing, find the position of the key on the circle and then move forward until you find a server replica. If the server replica is Sij then the key is stored in server Si. Suppose server S3 is removed, then all S3 replicas with labels S30 S31 \u2026 S39 must be removed. Now the objects keys adjacent to S3X labels will be automatically re-assigned to S1X, S2X and S4X. 
All keys originally assigned to S1, S2 & S4 will not be moved. Similar things happen if we add a server. Suppose we want to add a server S5 as a replacement of S3 then we need to add labels S50 S51 \u2026 S59. In the ideal case, one-fourth of keys from S1, S2 and S4 will be reassigned to S5. When applied to persistent storages, further issues arise: if a node has left the scene, data stored on this node becomes unavailable, unless it has been replicated to other nodes before; in the opposite case of a new node joining the others, adjacent nodes are no longer responsible for some pieces of data which they still store but not get asked for anymore as the corresponding objects are no longer hashed to them by requesting clients. In order to address this issue, a replication factor (r) can be introduced. Introducing replicas in a partitioning scheme\u2014besides reliability benefits\u2014also makes it possible to spread workload for read requests that can go to any physical node responsible for a requested piece of data. Scalability doesn\u2019t work if the clients have to decide between multiple versions of the dataset, because they need to read from a quorum of servers which in turn reduces the efficiency of load balancing.","title":"Consistent Hashing"},{"location":"level101/databases_nosql/key_concepts/#quorum","text":"Quorum is the minimum number of nodes in a cluster that must be online and be able to communicate with each other. If any additional node failure occurs beyond this threshold, the cluster will stop running. To attain a quorum, you need a majority of the nodes. Commonly it is (N/2 + 1), where N is the total no of nodes in the system. For ex, In a 3 node cluster, you need 2 nodes for a majority, In a 5 node cluster, you need 3 nodes for a majority, In a 6 node cluster, you need 4 nodes for a majority. Quorum example Network problems can cause communication failures among cluster nodes. One set of nodes might be able to communicate together across a functioning part of a network but not be able to communicate with a different set of nodes in another part of the network. This is known as split brain in cluster or cluster partitioning. Now the partition which has quorum is allowed to continue running the application. The other partitions are removed from the cluster. Eg: In a 5 node cluster, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5, being a minority, stop running as a cluster. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run. Below diagram demonstrates Quorum selection on a cluster partitioned into two sets. Cluster Quorum example","title":"Quorum"},{"location":"level101/databases_sql/backup_recovery/","text":"Backup and Recovery Backups are a very crucial part of any database setup. They are generally a copy of the data that can be used to reconstruct the data in case of any major or minor crisis with the database. In general terms backups can be of two types:- Physical Backup - the data directory as it is on the disk Logical Backup - the table structure and records in it Both the above kinds of backups are supported by MySQL with different tools. It is the job of the SRE to identify which should be used when. 
Mysqldump This utility is available with MySQL installation. It helps in getting the logical backup of the database. It outputs a set of SQL statements to reconstruct the data. It is not recommended to use mysqldump for large tables as it might take a lot of time and the file size will be huge. However, for small tables it is the best and the quickest option. mysqldump [options] > dump_output.sql There are certain options that can be used with mysqldump to get an appropriate dump of the database. To dump all the databases mysqldump -u -p --all-databases > all_dbs.sql To dump specific databases mysqldump -u -p --databases db1 db2 db3 > dbs.sql To dump a single database mysqldump -u -p --databases db1 > db1.sql OR mysqldump -u -p db1 > db1.sql The difference between the above two commands is that the latter one does not contain the CREATE DATABASE command in the backup output. To dump specific tables in a database mysqldump -u -p db1 table1 table2 > db1_tables.sql To dump only table structures and no data mysqldump -u -p --no-data db1 > db1_structure.sql To dump only table data and no CREATE statements mysqldump -u -p --no-create-info db1 > db1_data.sql To dump only specific records from a table mysqldump -u -p --no-create-info db1 table1 --where=\u201dsalary>80000\u201d > db1_table1_80000.sql Mysqldump can also provide output in CSV, other delimited text or XML format to support use-cases if any. The backup from mysqldump utility is offline i.e. when the backup finishes it will not have the changes to the database which were made when the backup was going on. For example, if the backup started at 3 PM and finished at 4 PM, it will not have the changes made to the database between 3 and 4 PM. Restoring from mysqldump can be done in the following two ways:- From shell mysql -u -p < all_dbs.sql OR From shell if the database is already created mysql -u -p db1 < db1.sql From within MySQL shell mysql> source all_dbs.sql Percona Xtrabackup This utility is installed separately from the MySQL server and is open source, provided by Percona. It helps in getting the full or partial physical backup of the database. It provides online backup of the database i.e. it will have the changes made to the database when the backup was going on as explained at the end of the previous section. Full Backup - the complete backup of the database. Partial Backup - Incremental Cumulative - After one full backup, the next backups will have changes post the full backup. For example, we took a full backup on Sunday, from Monday onwards every backup will have changes after Sunday; so, Tuesday\u2019s backup will have Monday\u2019s changes as well, Wednesday\u2019s backup will have changes of Monday and Tuesday as well and so on. Differential - After one full backup, the next backups will have changes post the previous incremental backup. For example, we took a full backup on Sunday, Monday will have changes done after Sunday, Tuesday will have changes done after Monday, and so on. Percona xtrabackup allows us to get both full and incremental backups as we desire. However, incremental backups take less space than a full backup (if taken per day) but the restore time of incremental backups is more than that of full backups. 
Creating a full backup xtrabackup --defaults-file= --user= --password= --backup --target-dir= Example xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/ Some other options --stream - can be used to stream the backup files to standard output in a specified format. xbstream is the only option for now. --tmp-dir - set this to a tmp directory to be used for temporary files while taking backups. --parallel - set this to the number of threads that can be used to parallely copy data files to target directory. --compress - by default - quicklz is used. Set this to have the backup in compressed format. Each file is a .qp compressed file and can be extracted by qpress file archiver. --decompress - decompresses all the files which were compressed with the .qp extension. It will not delete the .qp files after decompression. To do that, use --remove-original along with this. Please note that the decompress option should be run separately from the xtrabackup command that used the compress option. Preparing a backup Once the backup is done with the --backup option, we need to prepare it in order to restore it. This is done to make the datafiles consistent with point-in-time. There might have been some transactions going on while the backup was being executed and those have changed the data files. When we prepare a backup, all those transactions are applied to the data files. xtrabackup --prepare --target-dir= Example xtrabackup --prepare --target-dir=/mnt/data/backup/ It is not recommended to halt a process which is preparing the backup as that might cause data file corruption and backup cannot be used further. The backup will have to be taken again. Restoring a Full Backup To restore the backup which is created and prepared from above commands, just copy everything from the backup target-dir to the data-dir of MySQL server, change the ownership of all files to mysql user (the linux user used by MySQL server) and start mysql. Or the below command can be used as well, xtrabackup --defaults-file=/etc/my.cnf --copy-back --target-dir=/mnt/data/backups/ Note - the backup has to be prepared in order to restore it. Creating Incremental backups Percona Xtrabackup helps create incremental backups, i.e only the changes can be backed up since the last backup. Every InnoDB page contains a log sequence number or LSN that is also mentioned as one of the last lines of backup and prepare commands. xtrabackup: Transaction log of lsn to was copied. OR InnoDB: Shutdown completed; log sequence number completed OK! This indicates that the backup has been taken till the log sequence number mentioned. This is a key information in understanding incremental backups and working towards automating one. Incremental backups do not compare data files for changes, instead, they go through the InnoDB pages and compare their LSN to the last backup\u2019s LSN. So, without one full backup, the incremental backups are useless. The xtrabackup command creates a xtrabackup_checkpoint file which has the information about the LSN of the backup. Below are the key contents of the file:- backup_type = full-backuped | incremental from_lsn = 0 (full backup) | to_lsn of last backup to_lsn = last_lsn = There is a difference between to_lsn and last_lsn . When the last_lsn is more than to_lsn that means there are transactions that ran while we took the backup and are yet to be applied. That is what --prepare is used for. To take incremental backups, first, we require one full backup. 
xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/full/ Let\u2019s assume the contents of the xtrabackup_checkpoint file to be as follows. backup_type = full-backuped from_lsn = 0 to_lsn = 1000 last_lsn = 1000 Now that we have one full backup, we can have an incremental backup that takes the changes. We will go with differential incremental backups. xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/incr1/ --incremental-basedir=/mnt/data/backup/full/ Delta files such as ibdata1.delta and db1/tbl1.ibd.delta are created in the incr1 directory, containing the changes since the full backup. The xtrabackup_checkpoint file will thus have the following contents. backup_type = incremental from_lsn = 1000 to_lsn = 1500 last_lsn = 1500 Hence, the from_lsn here is equal to the to_lsn of the last backup, i.e. the basedir provided for the incremental backup. For the next incremental backup we can use this incremental backup as the basedir. xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/incr2/ --incremental-basedir=/mnt/data/backup/incr1/ The xtrabackup_checkpoint file will thus have the following contents. backup_type = incremental from_lsn = 1500 to_lsn = 2000 last_lsn = 2200 Preparing Incremental backups Preparing incremental backups is not the same as preparing a full backup. When prepare runs, two operations are performed - committed transactions are applied on the data files and uncommitted transactions are rolled back. While preparing incremental backups, we have to skip rollback of uncommitted transactions as it is likely that they might get committed in the next incremental backup. If we roll back uncommitted transactions, the further incremental backups cannot be applied. We use the --apply-log-only option along with --prepare to avoid the rollback phase. From the last section, we had the following directories with the complete backup /mnt/data/backup/full /mnt/data/backup/incr1 /mnt/data/backup/incr2 First, we prepare the full backup, but only with the --apply-log-only option. xtrabackup --prepare --apply-log-only --target-dir=/mnt/data/backup/full The output of the command will contain the following at the end. InnoDB: Shutdown complete; log sequence number 1000 Completed OK! Note the LSN mentioned at the end is the same as the to_lsn from the xtrabackup_checkpoint created for the full backup. Next, we apply the changes from the first incremental backup to the full backup. xtrabackup --prepare --apply-log-only --target-dir=/mnt/data/backup/full --incremental-dir=/mnt/data/backup/incr1 This applies the delta files in the incremental directory to the full backup directory. It rolls the data files in the full backup directory forward to the time of the incremental backup and applies the redo logs as usual. Lastly, we apply the last incremental backup the same way as the previous one, with just a small change. xtrabackup --prepare --target-dir=/mnt/data/backup/full --incremental-dir=/mnt/data/backup/incr2 We do not have to use the --apply-log-only option with it. It applies the incr2 delta files to the full backup data files taking them forward, applies redo logs on them and finally rolls back the uncommitted transactions to produce the final result. The data now present in the full backup directory can now be used to restore.
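The differential chain walked through above (one full backup, then each incremental taking the previous backup as its --incremental-basedir) lends itself to a small wrapper. The Python sketch below only assembles the same commands already shown; the directory layout, credentials and dry-run behaviour are assumptions made for illustration, not a production backup script.

```python
import os

BACKUP_ROOT = "/mnt/data/backup"          # same layout as the walkthrough above (assumed)
BASE_CMD = ["xtrabackup", "--defaults-file=/etc/my.cnf",
            "--user=some_user", "--password=XXXX", "--backup"]

def next_backup_cmd():
    # No full backup yet -> take one first; incrementals are useless without it.
    if not os.path.isdir(os.path.join(BACKUP_ROOT, "full")):
        return BASE_CMD + [f"--target-dir={BACKUP_ROOT}/full/"]
    # Differential chain: each incremental uses the previous backup as its basedir.
    incrs = sorted((d for d in os.listdir(BACKUP_ROOT) if d.startswith("incr")),
                   key=lambda d: int(d[4:]))
    basedir = incrs[-1] if incrs else "full"          # full -> incr1 -> incr2 -> ...
    target = f"incr{len(incrs) + 1}"
    return BASE_CMD + [f"--target-dir={BACKUP_ROOT}/{target}/",
                       f"--incremental-basedir={BACKUP_ROOT}/{basedir}/"]

cmd = next_backup_cmd()
print(" ".join(cmd))   # dry run; use subprocess.run(cmd, check=True) to actually execute
```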
Note - To create cumulative incremental backups, the incremental-basedir should always be the full backup directory for every incremental backup. While preparing, we can start with the full backup with the --apply-log-only option and use just the last incremental backup for the final --prepare as that has all the changes since the full backup. Restoring Incremental backups Once all the above steps are completed, restoring is the same as done for a full backup. Further Reading MySQL Point-In-Time-Recovery Another MySQL backup tool - mysqlpump Another MySQL backup tool - mydumper A comparison between mysqldump, mysqlpump and mydumper Backup Best Practices","title":"Backup and Recovery"},{"location":"level101/databases_sql/backup_recovery/#backup-and-recovery","text":"Backups are a very crucial part of any database setup. They are generally a copy of the data that can be used to reconstruct the data in case of any major or minor crisis with the database. In general terms backups can be of two types:- Physical Backup - the data directory as it is on the disk Logical Backup - the table structure and records in it Both the above kinds of backups are supported by MySQL with different tools. It is the job of the SRE to identify which should be used when.","title":"Backup and Recovery"},{"location":"level101/databases_sql/backup_recovery/#mysqldump","text":"This utility is available with MySQL installation. It helps in getting the logical backup of the database. It outputs a set of SQL statements to reconstruct the data. It is not recommended to use mysqldump for large tables as it might take a lot of time and the file size will be huge. However, for small tables it is the best and the quickest option. mysqldump [options] > dump_output.sql There are certain options that can be used with mysqldump to get an appropriate dump of the database. To dump all the databases mysqldump -u -p --all-databases > all_dbs.sql To dump specific databases mysqldump -u -p --databases db1 db2 db3 > dbs.sql To dump a single database mysqldump -u -p --databases db1 > db1.sql OR mysqldump -u -p db1 > db1.sql The difference between the above two commands is that the latter one does not contain the CREATE DATABASE command in the backup output. To dump specific tables in a database mysqldump -u -p db1 table1 table2 > db1_tables.sql To dump only table structures and no data mysqldump -u -p --no-data db1 > db1_structure.sql To dump only table data and no CREATE statements mysqldump -u -p --no-create-info db1 > db1_data.sql To dump only specific records from a table mysqldump -u -p --no-create-info db1 table1 --where=\u201dsalary>80000\u201d > db1_table1_80000.sql Mysqldump can also provide output in CSV, other delimited text or XML format to support use-cases if any. The backup from mysqldump utility is offline i.e. when the backup finishes it will not have the changes to the database which were made when the backup was going on. For example, if the backup started at 3 PM and finished at 4 PM, it will not have the changes made to the database between 3 and 4 PM. Restoring from mysqldump can be done in the following two ways:- From shell mysql -u -p < all_dbs.sql OR From shell if the database is already created mysql -u -p db1 < db1.sql From within MySQL shell mysql> source all_dbs.sql","title":"Mysqldump"},{"location":"level101/databases_sql/backup_recovery/#percona-xtrabackup","text":"This utility is installed separately from the MySQL server and is open source, provided by Percona. 
It helps in getting the full or partial physical backup of the database. It provides online backup of the database i.e. it will have the changes made to the database when the backup was going on as explained at the end of the previous section. Full Backup - the complete backup of the database. Partial Backup - Incremental Cumulative - After one full backup, the next backups will have changes post the full backup. For example, we took a full backup on Sunday, from Monday onwards every backup will have changes after Sunday; so, Tuesday\u2019s backup will have Monday\u2019s changes as well, Wednesday\u2019s backup will have changes of Monday and Tuesday as well and so on. Differential - After one full backup, the next backups will have changes post the previous incremental backup. For example, we took a full backup on Sunday, Monday will have changes done after Sunday, Tuesday will have changes done after Monday, and so on. Percona xtrabackup allows us to get both full and incremental backups as we desire. However, incremental backups take less space than a full backup (if taken per day) but the restore time of incremental backups is more than that of full backups. Creating a full backup xtrabackup --defaults-file= --user= --password= --backup --target-dir= Example xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/ Some other options --stream - can be used to stream the backup files to standard output in a specified format. xbstream is the only option for now. --tmp-dir - set this to a tmp directory to be used for temporary files while taking backups. --parallel - set this to the number of threads that can be used to parallely copy data files to target directory. --compress - by default - quicklz is used. Set this to have the backup in compressed format. Each file is a .qp compressed file and can be extracted by qpress file archiver. --decompress - decompresses all the files which were compressed with the .qp extension. It will not delete the .qp files after decompression. To do that, use --remove-original along with this. Please note that the decompress option should be run separately from the xtrabackup command that used the compress option. Preparing a backup Once the backup is done with the --backup option, we need to prepare it in order to restore it. This is done to make the datafiles consistent with point-in-time. There might have been some transactions going on while the backup was being executed and those have changed the data files. When we prepare a backup, all those transactions are applied to the data files. xtrabackup --prepare --target-dir= Example xtrabackup --prepare --target-dir=/mnt/data/backup/ It is not recommended to halt a process which is preparing the backup as that might cause data file corruption and backup cannot be used further. The backup will have to be taken again. Restoring a Full Backup To restore the backup which is created and prepared from above commands, just copy everything from the backup target-dir to the data-dir of MySQL server, change the ownership of all files to mysql user (the linux user used by MySQL server) and start mysql. Or the below command can be used as well, xtrabackup --defaults-file=/etc/my.cnf --copy-back --target-dir=/mnt/data/backups/ Note - the backup has to be prepared in order to restore it. Creating Incremental backups Percona Xtrabackup helps create incremental backups, i.e only the changes can be backed up since the last backup. 
Every InnoDB page contains a log sequence number or LSN that is also mentioned as one of the last lines of backup and prepare commands. xtrabackup: Transaction log of lsn to was copied. OR InnoDB: Shutdown completed; log sequence number completed OK! This indicates that the backup has been taken till the log sequence number mentioned. This is a key information in understanding incremental backups and working towards automating one. Incremental backups do not compare data files for changes, instead, they go through the InnoDB pages and compare their LSN to the last backup\u2019s LSN. So, without one full backup, the incremental backups are useless. The xtrabackup command creates a xtrabackup_checkpoint file which has the information about the LSN of the backup. Below are the key contents of the file:- backup_type = full-backuped | incremental from_lsn = 0 (full backup) | to_lsn of last backup to_lsn = last_lsn = There is a difference between to_lsn and last_lsn . When the last_lsn is more than to_lsn that means there are transactions that ran while we took the backup and are yet to be applied. That is what --prepare is used for. To take incremental backups, first, we require one full backup. xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/full/ Let\u2019s assume the contents of the xtrabackup_checkpoint file to be as follows. backup_type = full-backuped from_lsn = 0 to_lsn = 1000 last_lsn = 1000 Now that we have one full backup, we can have an incremental backup that takes the changes. We will go with differential incremental backups. xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/incr1/ --incremental-basedir=/mnt/data/backup/full/ There are delta files created in the incr1 directory like, ibdata1.delta , db1/tbl1.ibd.delta with the changes from the full directory. The xtrabackup_checkpoint file will thus have the following contents. backup_type = incremental from_lsn = 1000 to_lsn = 1500 last_lsn = 1500 Hence, the from_lsn here is equal to the to_lsn of the last backup or the basedir provided for the incremental backups. For the next incremental backup we can use this incremental backup as the basedir. xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=XXXX --backup --target-dir=/mnt/data/backup/incr2/ --incremental-basedir=/mnt/data/backup/incr1/ The xtrabackup_checkpoint file will thus have the following contents. backup_type = incremental from_lsn = 1500 to_lsn = 2000 last_lsn = 2200 Preparing Incremental backups Preparing incremental backups is not the same as preparing a full backup. When prepare runs, two operations are performed - committed transactions are applied on the data files and uncommitted transactions are rolled back . While preparing incremental backups, we have to skip rollback of uncommitted transactions as it is likely that they might get committed in the next incremental backup. If we rollback uncommitted transactions the further incremental backups cannot be applied. We use --apply-log-only option along with --prepare to avoid the rollback phase. From the last section, we had the following directories with complete backup /mnt/data/backup/full /mnt/data/backup/incr1 /mnt/data/backup/incr2 First, we prepare the full backup, but only with the --apply-log-only option. xtrabackup --prepare --apply-log-only --target-dir=/mnt/data/backup/full The output of the command will contain the following at the end. 
InnoDB: Shutdown complete; log sequence number 1000 Completed OK! Note the LSN mentioned at the end is the same as the to_lsn from the xtrabackup_checkpoint created for full backup. Next, we apply the changes from the first incremental backup to the full backup. xtrabackup --prepare --apply-log-only --target-dir=/mnt/data/backup/full --incremental-dir=/mnt/data/backup/incr1 This applies the delta files in the incremental directory to the full backup directory. It rolls the data files in the full backup directory forward to the time of incremental backup and applies the redo logs as usual. Lastly, we apply the last incremental backup same as the previous one with just a small change. xtrabackup --prepare --target-dir=/mnt/data/backup/full --incremental-dir=/mnt/data/backup/incr1 We do not have to use the --apply-log-only option with it. It applies the incr2 delta files to the full backup data files taking them forward, applies redo logs on them and finally rollbacks the uncommitted transactions to produce the final result. The data now present in the full backup directory can now be used to restore. Note - To create cumulative incremental backups, the incremental-basedir should always be the full backup directory for every incremental backup. While preparing, we can start with the full backup with the --apply-log-only option and use just the last incremental backup for the final --prepare as that has all the changes since the full backup. Restoring Incremental backups Once all the above steps are completed, restoring is the same as done for a full backup.","title":"Percona Xtrabackup"},{"location":"level101/databases_sql/backup_recovery/#further-reading","text":"MySQL Point-In-Time-Recovery Another MySQL backup tool - mysqlpump Another MySQL backup tool - mydumper A comparison between mysqldump, mysqlpump and mydumper Backup Best Practices","title":"Further Reading"},{"location":"level101/databases_sql/concepts/","text":"Relational DBs are used for data storage. Even a file can be used to store data, but relational DBs are designed with specific goals: Efficiency Ease of access and management Organized Handle relations between data (represented as tables) Transaction: a unit of work that can comprise multiple statements, executed together ACID properties Set of properties that guarantee data integrity of DB transactions Atomicity: Each transaction is atomic (succeeds or fails completely) Consistency: Transactions only result in valid state (which includes rules, constraints, triggers etc.) Isolation: Each transaction is executed independently of others safely within a concurrent system Durability: Completed transactions will not be lost due to any later failures Let\u2019s take some examples to illustrate the above properties. Account A has a balance of \u20b9200 & B has \u20b9400. Account A is transferring \u20b9100 to Account B. This transaction has a deduction from sender and an addition into the recipient\u2019s balance. If the first operation passes successfully while the second fails, A\u2019s balance would be \u20b9100 while B would be having \u20b9400 instead of \u20b9500. Atomicity in a DB ensures this partially failed transaction is rolled back. If the second operation above fails, it leaves the DB inconsistent (sum of balance of accounts before and after the operation is not the same). Consistency ensures that this does not happen. 
There are three operations, one to calculate interest for A\u2019s account, another to add that to A\u2019s account, then transfer \u20b9100 from B to A. Without isolation guarantees, concurrent execution of these 3 operations may lead to a different outcome every time. What happens if the system crashes before the transactions are written to disk? Durability ensures that the changes are applied correctly during recovery. Relational data Tables represent relations Columns (fields) represent attributes Rows are individual records Schema describes the structure of DB SQL A query language to interact with and manage data. CRUD operations - create, read, update, delete queries Management operations - create DBs/tables/indexes etc, backup, import/export, users, access controls Exercise: Classify the below queries into the four types - DDL (definition), DML(manipulation), DCL(control) and TCL(transactions) and explain in detail. insert, create, drop, delete, update, commit, rollback, truncate, alter, grant, revoke You can practise these in the lab section . Constraints Rules for data that can be stored. Query fails if you violate any of these defined on a table. Primary key: one or more columns that contain UNIQUE values, and cannot contain NULL values. A table can have only ONE primary key. An index on it is created by default. Foreign key: links two tables together. Its value(s) match a primary key in a different table \\ Not null: Does not allow null values \\ Unique: Value of column must be unique across all rows \\ Default: Provides a default value for a column if none is specified during insert Check: Allows only particular values (like Balance >= 0) Indexes Most indexes use B+ tree structure. Why use them: Speeds up queries (in large tables that fetch only a few rows, min/max queries, by eliminating rows from consideration etc) Types of indexes: unique, primary key, fulltext, secondary Write-heavy loads, mostly full table scans or accessing large number of rows etc. do not benefit from indexes Joins Allows you to fetch related data from multiple tables, linking them together with some common field. Powerful but also resource-intensive and makes scaling databases difficult. This is the cause of many slow performing queries when run at scale, and the solution is almost always to find ways to reduce the joins. Access control DBs have privileged accounts for admin tasks, and regular accounts for clients. There are finegrained controls on what actions(DDL, DML etc. discussed earlier )are allowed for these accounts. DB first verifies the user credentials (authentication), and then examines whether this user is permitted to perform the request (authorization) by looking up these information in some internal tables. Other controls include activity auditing that allows examining the history of actions done by a user, and resource limits which define the number of queries, connections etc. allowed. Popular databases Commercial, closed source - Oracle, Microsoft SQL Server, IBM DB2 Open source with optional paid support - MySQL, MariaDB, PostgreSQL Individuals and small companies have always preferred open source DBs because of the huge cost associated with commercial software. In recent times, even large organizations have moved away from commercial software to open source alternatives because of the flexibility and cost savings associated with it. Lack of support is no longer a concern because of the paid support available from the developer and third parties. 
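To connect the account-transfer example from earlier in this section to runnable code, here is a small sketch using the built-in sqlite3 module in Python (chosen only because it needs no server; the schema, amounts and CHECK constraint are made up for illustration). It shows a transaction either applying both balance updates or rolling back entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('A', 200), ('B', 400)")
conn.commit()

def transfer(conn, sender, receiver, amount):
    try:
        with conn:  # one transaction: both UPDATEs commit together or not at all
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, sender))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, receiver))
    except sqlite3.IntegrityError:
        print("transfer rolled back")   # e.g. the CHECK constraint would make a balance negative

transfer(conn, "A", "B", 100)   # succeeds: A=100, B=500
transfer(conn, "A", "B", 500)   # fails atomically: balances stay 100 and 500
print(list(conn.execute("SELECT * FROM accounts")))
```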
MySQL is the most widely used open source DB, and it is widely supported by hosting providers, making it easy for anyone to use. It is part of the popular Linux-Apache-MySQL-PHP ( LAMP ) stack that became popular in the 2000s. We have many more choices for a programming language, but the rest of that stack is still widely used.","title":"Key Concepts"},{"location":"level101/databases_sql/concepts/#popular-databases","text":"Commercial, closed source - Oracle, Microsoft SQL Server, IBM DB2 Open source with optional paid support - MySQL, MariaDB, PostgreSQL Individuals and small companies have always preferred open source DBs because of the huge cost associated with commercial software. In recent times, even large organizations have moved away from commercial software to open source alternatives because of the flexibility and cost savings associated with it. Lack of support is no longer a concern because of the paid support available from the developer and third parties. MySQL is the most widely used open source DB, and it is widely supported by hosting providers, making it easy for anyone to use. It is part of the popular Linux-Apache-MySQL-PHP ( LAMP ) stack that became popular in the 2000s. We have many more choices for a programming language, but the rest of that stack is still widely used.","title":"Popular databases"},{"location":"level101/databases_sql/conclusion/","text":"Conclusion We have covered basic concepts of SQL databases. We have also covered some of the tasks that an SRE may be responsible for - there is so much more to learn and do. We hope this course gives you a good start and inspires you to explore further. Further reading More practice with online resources like this one Normalization Routines , triggers Views Transaction isolation levels Sharding Setting up HA , monitoring , backups","title":"Conclusion"},{"location":"level101/databases_sql/conclusion/#conclusion","text":"We have covered basic concepts of SQL databases. We have also covered some of the tasks that an SRE may be responsible for - there is so much more to learn and do. We hope this course gives you a good start and inspires you to explore further.","title":"Conclusion"},{"location":"level101/databases_sql/conclusion/#further-reading","text":"More practice with online resources like this one Normalization Routines , triggers Views Transaction isolation levels Sharding Setting up HA , monitoring , backups","title":"Further reading"},{"location":"level101/databases_sql/innodb/","text":"Why should you use this? General purpose, row level locking, ACID support, transactions, crash recovery and multi-version concurrency control etc. Architecture Key components: Memory: Buffer pool: LRU cache of frequently used data(table and index) to be processed directly from memory, which speeds up processing. Important for tuning performance. Change buffer: Caches changes to secondary index pages when those pages are not in the buffer pool and merges it when they are fetched. Merging may take a long time and impact live queries. It also takes up part of the buffer pool. Avoids the extra I/O to read secondary indexes in. Adaptive hash index: Supplements InnoDB\u2019s B-Tree indexes with fast hash lookup tables like a cache. Slight performance penalty for misses, also adds maintenance overhead of updating it. Hash collisions cause AHI rebuilding for large DBs. Log buffer: Holds log data before flush to disk. Size of each above memory is configurable, and impacts performance a lot. 
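For instance, these sizes can be inspected (and some of them changed) at runtime from a mysql client; the sketch below only shows the idea, and the 2 GB value is an arbitrary example.

```sql
-- Inspect the configurable InnoDB memory areas mentioned above.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_log_buffer_size';
-- The buffer pool can be resized online (value in bytes; example only).
SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;
```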
Requires careful analysis of workload, available resources, benchmarking and tuning for optimal performance. Disk: Tables: Stores data within rows and columns. Indexes: Helps find rows with specific column values quickly, avoids full table scans. Redo Logs: all transactions are written to them, and after a crash, the recovery process corrects data written by incomplete transactions and replays any pending ones. Undo Logs: Records associated with a single transaction that contains information about how to undo the latest change by a transaction.","title":"InnoDB"},{"location":"level101/databases_sql/innodb/#why-should-you-use-this","text":"General purpose, row level locking, ACID support, transactions, crash recovery and multi-version concurrency control etc.","title":"Why should you use this?"},{"location":"level101/databases_sql/innodb/#architecture","text":"","title":"Architecture"},{"location":"level101/databases_sql/innodb/#key-components","text":"Memory: Buffer pool: LRU cache of frequently used data(table and index) to be processed directly from memory, which speeds up processing. Important for tuning performance. Change buffer: Caches changes to secondary index pages when those pages are not in the buffer pool and merges it when they are fetched. Merging may take a long time and impact live queries. It also takes up part of the buffer pool. Avoids the extra I/O to read secondary indexes in. Adaptive hash index: Supplements InnoDB\u2019s B-Tree indexes with fast hash lookup tables like a cache. Slight performance penalty for misses, also adds maintenance overhead of updating it. Hash collisions cause AHI rebuilding for large DBs. Log buffer: Holds log data before flush to disk. Size of each above memory is configurable, and impacts performance a lot. Requires careful analysis of workload, available resources, benchmarking and tuning for optimal performance. Disk: Tables: Stores data within rows and columns. Indexes: Helps find rows with specific column values quickly, avoids full table scans. Redo Logs: all transactions are written to them, and after a crash, the recovery process corrects data written by incomplete transactions and replays any pending ones. Undo Logs: Records associated with a single transaction that contains information about how to undo the latest change by a transaction.","title":"Key components:"},{"location":"level101/databases_sql/intro/","text":"Relational Databases Prerequisites Complete Linux course Install Docker (for lab section) What to expect from this course You will have an understanding of what relational databases are, their advantages, and some MySQL specific concepts. What is not covered under this course In depth implementation details Advanced topics like normalization, sharding Specific tools for administration Introduction The main purpose of database systems is to manage data. This includes storage, adding new data, deleting unused data, updating existing data, retrieving data within a reasonable response time, other maintenance tasks to keep the system running etc. 
Pre-reads RDBMS Concepts Course Contents Key Concepts MySQL Architecture InnoDB Backup and Recovery MySQL Replication Operational Concepts SELECT Query Query Performance Lab Further Reading","title":"Introduction"},{"location":"level101/databases_sql/intro/#relational-databases","text":"","title":"Relational Databases"},{"location":"level101/databases_sql/intro/#prerequisites","text":"Complete Linux course Install Docker (for lab section)","title":"Prerequisites"},{"location":"level101/databases_sql/intro/#what-to-expect-from-this-course","text":"You will have an understanding of what relational databases are, their advantages, and some MySQL specific concepts.","title":"What to expect from this course"},{"location":"level101/databases_sql/intro/#what-is-not-covered-under-this-course","text":"In depth implementation details Advanced topics like normalization, sharding Specific tools for administration","title":"What is not covered under this course"},{"location":"level101/databases_sql/intro/#introduction","text":"The main purpose of database systems is to manage data. This includes storage, adding new data, deleting unused data, updating existing data, retrieving data within a reasonable response time, other maintenance tasks to keep the system running etc.","title":"Introduction"},{"location":"level101/databases_sql/intro/#pre-reads","text":"RDBMS Concepts","title":"Pre-reads"},{"location":"level101/databases_sql/intro/#course-contents","text":"Key Concepts MySQL Architecture InnoDB Backup and Recovery MySQL Replication Operational Concepts SELECT Query Query Performance Lab Further Reading","title":"Course Contents"},{"location":"level101/databases_sql/lab/","text":"Prerequisites Install Docker Setup Create a working directory named sos or something similar, and cd into it. Enter the following into a file named my.cnf under a directory named custom. sos $ cat custom/my.cnf [mysqld] # These settings apply to MySQL server # You can set port, socket path, buffer size etc. 
# Below, we are configuring slow query settings slow_query_log=1 slow_query_log_file=/var/log/mysqlslow.log long_query_time=1 Start a container and enable slow query log with the following: sos $ docker run --name db -v custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=realsecret -d mysql:8 sos $ docker cp custom/my.cnf $(docker ps -qf \"name=db\"):/etc/mysql/conf.d/custom.cnf sos $ docker restart $(docker ps -qf \"name=db\") Import a sample database sos $ git clone git@github.com:datacharmer/test_db.git sos $ docker cp test_db $(docker ps -qf \"name=db\"):/home/test_db/ sos $ docker exec -it $(docker ps -qf \"name=db\") bash root@3ab5b18b0c7d:/# cd /home/test_db/ root@3ab5b18b0c7d:/# mysql -uroot -prealsecret mysql < employees.sql root@3ab5b18b0c7d:/etc# touch /var/log/mysqlslow.log root@3ab5b18b0c7d:/etc# chown mysql:mysql /var/log/mysqlslow.log Workshop 1: Run some sample queries Run the following $ mysql -uroot -prealsecret mysql mysql> # inspect DBs and tables # the last 4 are MySQL internal DBs mysql> show databases; +--------------------+ | Database | +--------------------+ | employees | | information_schema | | mysql | | performance_schema | | sys | +--------------------+ mysql> use employees; mysql> show tables; +----------------------+ | Tables_in_employees | +----------------------+ | current_dept_emp | | departments | | dept_emp | | dept_emp_latest_date | | dept_manager | | employees | | salaries | | titles | +----------------------+ # read a few rows mysql> select * from employees limit 5; # filter data by conditions mysql> select count(*) from employees where gender = 'M' limit 5; # find count of particular data mysql> select count(*) from employees where first_name = 'Sachin'; Workshop 2: Use explain and explain analyze to profile a query, identify and add indexes required for improving performance # View all indexes on table #(\\G is to output vertically, replace it with a ; to get table output) mysql> show index from employees from employees\\G *************************** 1. row *************************** Table: employees Non_unique: 0 Key_name: PRIMARY Seq_in_index: 1 Column_name: emp_no Collation: A Cardinality: 299113 Sub_part: NULL Packed: NULL Null: Index_type: BTREE Comment: Index_comment: Visible: YES Expression: NULL # This query uses an index, identified by 'key' field # By prefixing explain keyword to the command, # we get query plan (including key used) mysql> explain select * from employees where emp_no < 10005\\G *************************** 1. row *************************** id: 1 select_type: SIMPLE table: employees partitions: NULL type: range possible_keys: PRIMARY key: PRIMARY key_len: 4 ref: NULL rows: 4 filtered: 100.00 Extra: Using where # Compare that to the next query which does not utilize any index mysql> explain select first_name, last_name from employees where first_name = 'Sachin'\\G *************************** 1. row *************************** id: 1 select_type: SIMPLE table: employees partitions: NULL type: ALL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: 299113 filtered: 10.00 Extra: Using where # Let's see how much time this query takes mysql> explain analyze select first_name, last_name from employees where first_name = 'Sachin'\\G *************************** 1.
row *************************** EXPLAIN: -> Filter: (employees.first_name = 'Sachin') (cost=30143.55 rows=29911) (actual time=28.284..3952.428 rows=232 loops=1) -> Table scan on employees (cost=30143.55 rows=299113) (actual time=0.095..1996.092 rows=300024 loops=1) # Cost(estimated by query planner) is 30143.55 # actual time=28.284ms for first row, 3952.428 for all rows # Now lets try adding an index and running the query again mysql> create index idx_firstname on employees(first_name); Query OK, 0 rows affected (1.25 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> explain analyze select first_name, last_name from employees where first_name = 'Sachin'; +--------------------------------------------------------------------------------------------------------------------------------------------+ | EXPLAIN | +--------------------------------------------------------------------------------------------------------------------------------------------+ | -> Index lookup on employees using idx_firstname (first_name='Sachin') (cost=81.20 rows=232) (actual time=0.551..2.934 rows=232 loops=1) | +--------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set (0.01 sec) # Actual time=0.551ms for first row # 2.934ms for all rows. A huge improvement! # Also notice that the query involves only an index lookup, # and no table scan (reading all rows of table) # ..which vastly reduces load on the DB. Workshop 3: Identify slow queries on a MySQL server # Run the command below in two terminal tabs to open two shells into the container. docker exec -it $(docker ps -qf \"name=db\") bash # Open a mysql prompt in one of them and execute this command # We have configured to log queries that take longer than 1s, # so this sleep(3) will be logged mysql -uroot -prealsecret mysql mysql> select sleep(3); # Now, in the other terminal, tail the slow log to find details about the query root@62c92c89234d:/etc# tail -f /var/log/mysqlslow.log /usr/sbin/mysqld, Version: 8.0.21 (MySQL Community Server - GPL). started with: Tcp port: 3306 Unix socket: /var/run/mysqld/mysqld.sock Time Id Command Argument # Time: 2020-11-26T14:53:44.822348Z # User@Host: root[root] @ localhost [] Id: 9 # Query_time: 5.404938 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 1 use employees; # Time: 2020-11-26T14:53:58.015736Z # User@Host: root[root] @ localhost [] Id: 9 # Query_time: 10.000225 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 1 SET timestamp=1606402428; select sleep(3); These were simulated examples with minimal complexity. In real life, the queries would be much more complex and the explain/analyze and slow query logs would have more details.","title":"Lab"},{"location":"level101/databases_sql/mysql/","text":"MySQL architecture MySQL architecture enables you to select the right storage engine for your needs, and abstracts away all implementation details from the end users (application engineers and DBA ) who only need to know a consistent stable API. 
Application layer: Connection handling - each client gets its own connection which is cached for the duration of access) Authentication - server checks (username,password,host) info of client and allows/rejects connection Security: server determines whether the client has privileges to execute each query (check with show privileges command) Server layer: Services and utilities - backup/restore, replication, cluster etc SQL interface - clients run queries for data access and manipulation SQL parser - creates a parse tree from the query (lexical/syntactic/semantic analysis and code generation) Optimizer - optimizes queries using various algorithms and data available to it(table level stats), modifies queries, order of scanning, indexes to use etc. (check with explain command) Caches and buffers - cache stores query results, buffer pool(InnoDB) stores table and index data in LRU fashion Storage engine options: InnoDB: most widely used, transaction support, ACID compliant, supports row-level locking, crash recovery and multi-version concurrency control. Default since MySQL 5.5+. MyISAM: fast, does not support transactions, provides table-level locking, great for read-heavy workloads, mostly in web and data warehousing. Default upto MySQL 5.1. Archive: optimised for high speed inserts, compresses data as it is inserted, does not support transactions, ideal for storing and retrieving large amounts of seldom referenced historical, archived data Memory: tables in memory. Fastest engine, supports table-level locking, does not support transactions, ideal for creating temporary tables or quick lookups, data is lost after a shutdown CSV: stores data in CSV files, great for integrating into other applications that use this format \u2026 etc. It is possible to migrate from one storage engine to another. But this migration locks tables for all operations and is not online, as it changes the physical layout of the data. It takes a long time and is generally not recommended. Hence, choosing the right storage engine at the beginning is important. General guideline is to use InnoDB unless you have a specific need for one of the other storage engines. Running mysql> SHOW ENGINES; shows you the supported engines on your MySQL server.","title":"MySQL"},{"location":"level101/databases_sql/mysql/#mysql-architecture","text":"MySQL architecture enables you to select the right storage engine for your needs, and abstracts away all implementation details from the end users (application engineers and DBA ) who only need to know a consistent stable API. Application layer: Connection handling - each client gets its own connection which is cached for the duration of access) Authentication - server checks (username,password,host) info of client and allows/rejects connection Security: server determines whether the client has privileges to execute each query (check with show privileges command) Server layer: Services and utilities - backup/restore, replication, cluster etc SQL interface - clients run queries for data access and manipulation SQL parser - creates a parse tree from the query (lexical/syntactic/semantic analysis and code generation) Optimizer - optimizes queries using various algorithms and data available to it(table level stats), modifies queries, order of scanning, indexes to use etc. 
(check with explain command) Caches and buffers - cache stores query results, buffer pool(InnoDB) stores table and index data in LRU fashion Storage engine options: InnoDB: most widely used, transaction support, ACID compliant, supports row-level locking, crash recovery and multi-version concurrency control. Default since MySQL 5.5+. MyISAM: fast, does not support transactions, provides table-level locking, great for read-heavy workloads, mostly in web and data warehousing. Default upto MySQL 5.1. Archive: optimised for high speed inserts, compresses data as it is inserted, does not support transactions, ideal for storing and retrieving large amounts of seldom referenced historical, archived data Memory: tables in memory. Fastest engine, supports table-level locking, does not support transactions, ideal for creating temporary tables or quick lookups, data is lost after a shutdown CSV: stores data in CSV files, great for integrating into other applications that use this format \u2026 etc. It is possible to migrate from one storage engine to another. But this migration locks tables for all operations and is not online, as it changes the physical layout of the data. It takes a long time and is generally not recommended. Hence, choosing the right storage engine at the beginning is important. General guideline is to use InnoDB unless you have a specific need for one of the other storage engines. Running mysql> SHOW ENGINES; shows you the supported engines on your MySQL server.","title":"MySQL architecture"},{"location":"level101/databases_sql/operations/","text":"Explain and explain+analyze EXPLAIN analyzes query plans from the optimizer, including how tables are joined, which tables/rows are scanned etc. Explain analyze shows the above and additional info like execution cost, number of rows returned, time taken etc. This knowledge is useful to tweak queries and add indexes. Watch this performance tuning tutorial video . Checkout the lab section for a hands-on about indexes. Slow query logs Used to identify slow queries (configurable threshold), enabled in config or dynamically with a query Checkout the lab section about identifying slow queries. User management This includes creation and changes to users, like managing privileges, changing password etc. Backup and restore strategies, pros and cons Logical backup using mysqldump - slower but can be done online Physical backup (copy data directory or use xtrabackup) - quick backup/recovery. Copying data directory requires locking or shut down. xtrabackup is an improvement because it supports backups without shutting down (hot backup). Others - PITR, snapshots etc. Crash recovery process using redo logs After a crash, when you restart server it reads redo logs and replays modifications to recover Monitoring MySQL Key MySQL metrics: reads, writes, query runtime, errors, slow queries, connections, running threads, InnoDB metrics Key OS metrics: CPU, load, memory, disk I/O, network Replication Copies data from one instance to one or more instances. Helps in horizontal scaling, data protection, analytics and performance. Binlog dump thread on primary, replication I/O and SQL threads on secondary. Strategies include the standard async, semi async or group replication. High Availability Ability to cope with failure at software, hardware and network level. Essential for anyone who needs 99.9%+ uptime. Can be implemented with replication or clustering solutions from MySQL, Percona, Oracle etc. Requires expertise to setup and maintain. 
Failover can be manual, scripted or using tools like Orchestrator. Data directory Data is stored in a particular directory, with nested directories for the data contained in each database. There are also MySQL log files, InnoDB log files, server process ID file and some other configs. The data directory is configurable. MySQL configuration This can be done by passing parameters during startup, or in a file. There are a few standard paths where MySQL looks for config files; /etc/my.cnf is one of the commonly used paths. These options are organized under headers (mysqld for server and mysql for client); you can explore them more in the lab that follows. Logs MySQL has logs for various purposes - general query log, errors, binary logs (for replication), slow query log. Only the error log is enabled by default (to reduce I/O and storage requirement); the others can be enabled when required - by specifying config parameters at startup or running commands at runtime. Log destination can also be tweaked with config parameters.","title":"Operational Concepts"},{"location":"level101/databases_sql/query_performance/","text":"Query Performance Improvement Query Performance is a very crucial aspect of relational databases. If not tuned correctly, the select queries can become slow and painful for the application, and for the MySQL server as well. The important task is to identify the slow queries and try to improve their performance by either rewriting them or creating proper indexes on the tables involved in it. The Slow Query Log The slow query log contains SQL statements that take a longer time to execute than the threshold set in the config parameter long_query_time. These queries are the candidates for optimization. There are some good utilities to summarize the slow query logs, like mysqldumpslow (provided by MySQL itself), pt-query-digest (provided by Percona), etc. Following are the config parameters that are used to enable and effectively catch slow queries Variable Explanation Example value slow_query_log Enables or disables slow query logs ON slow_query_log_file The location of the slow query log /var/lib/mysql/mysql-slow.log long_query_time Threshold time. The query that takes longer than this time is logged in slow query log 5 log_queries_not_using_indexes When enabled with the slow query log, the queries which do not make use of any index are also logged in the slow query log even though they take less time than long_query_time. ON So, for this section, we will be enabling slow_query_log, long_query_time will be kept to 0.3 (300 ms), and log_queries_not_using_indexes will be enabled as well. Below are the queries that we will execute on the employees database. select * from employees where last_name = 'Koblick'; select * from salaries where salary >= 100000; select * from titles where title = 'Manager'; select * from employees where year(hire_date) = 1995; select year(e.hire_date), max(s.salary) from employees e join salaries s on e.emp_no=s.emp_no group by year(e.hire_date); Now, queries 1, 3 and 4 executed under 300 ms but if we check the slow query logs, we will find these queries logged as they are not using any index. Queries 2 and 5 are taking longer than 300ms and also not using any index. Use the following command to get the summary of the slow query log mysqldumpslow /var/lib/mysql/mysql-slow.log The snapshot contains some more queries along with the ones mentioned above. Mysqldumpslow replaces actual values that were used by N (in case of numbers) and S (in case of strings).
That can be overridden by -a option, however that will increase the output lines if different values are used in similar queries. The EXPLAIN Plan The EXPLAIN command is used with any query that we want to analyze. It describes the query execution plan, how MySQL sees and executes the query. EXPLAIN works with Select, Insert, Update and Delete statements. It tells about different aspects of the query like, how tables are joined, indexes used or not, etc. The important thing here is to understand the basic Explain plan output of a query to determine its performance. Let's take the following query as an example, mysql> explain select * from salaries where salary = 100000; +----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+ | 1 | SIMPLE | salaries | NULL | ALL | NULL | NULL | NULL | NULL | 2838426 | 10.00 | Using where | +----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) The key aspects to understand in the above output are:- Partitions - the number of partitions considered while executing the query. It is only valid if the table is partitioned. Possible_keys - the list of indexes that were considered during creation of the execution plan. Key - the index that will be used while executing the query. Rows - the number of rows examined during the execution. Filtered - the percentage of rows that were filtered out of the rows examined. The maximum and most optimized result will have 100 in this field. Extra - this tells some extra information on how MySQL evaluates, whether the query is using only where clause to match target rows, any index or temporary table, etc. So, for the above query, we can determine that there are no partitions, there are no candidate indexes to be used and so no index is used at all, over 2M rows are examined and only 10% of them are included in the result, and lastly, only a where clause is used to match the target rows. Creating an Index Indexes are used to speed up selecting relevant rows for a given column value. Without an index, MySQL starts with the first row and goes through the entire table to find matching rows. If the table has too many rows, the operation becomes costly. With indexes, MySQL determines the position to start looking for the data without reading the full table. A primary key is also an index which is also the fastest and is stored along with the table data. Secondary indexes are stored outside of the table data and are used to further enhance the performance of SQL statements. Indexes are mostly stored as B-Trees, with some exceptions like spatial indexes use R-Trees and memory tables use hash indexes. There are 2 ways to create indexes:- While creating a table - if we know beforehand the columns that will drive the most number of where clauses in select queries, then we can put an index over them while creating a table. Altering a Table - To improve the performance of a troubling query, we create an index on a table which already has data in it using ALTER or CREATE INDEX command. This operation does not block the table but might take some time to complete depending on the size of the table. 
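The first approach, declaring the index as part of the table definition, looks like the sketch below; the table here is hypothetical and only meant to show the syntax, while the rest of this section uses the second approach on the existing employees data.

```sql
-- Hypothetical table: a secondary index declared at creation time.
CREATE TABLE payments (
    payment_id INT PRIMARY KEY,
    emp_no     INT NOT NULL,
    paid_on    DATE,
    INDEX idx_emp_no (emp_no)   -- secondary index created with the table
);
```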
Let\u2019s look at the query that we discussed in the previous section. It\u2019s clear that scanning over 2M records is not a good idea when only 10% of those records are actually in the resultset. Hence, we create an index on the salary column of the salaries table. create index idx_salary on salaries(salary) OR alter table salaries add index idx_salary(salary) And the same explain plan now looks like this mysql> explain select * from salaries where salary = 100000; +----+-------------+----------+------------+------+---------------+------------+---------+-------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+----------+------------+------+---------------+------------+---------+-------+------+----------+-------+ | 1 | SIMPLE | salaries | NULL | ref | idx_salary | idx_salary | 4 | const | 13 | 100.00 | NULL | +----+-------------+----------+------------+------+---------------+------------+---------+-------+------+----------+-------+ 1 row in set, 1 warning (0.00 sec) Now the index used is idx_salary, the one we recently created. The index actually helped examine only 13 records and all of them are in the resultset. Also, the query execution time is also reduced from over 700ms to almost negligible. Let\u2019s look at another example. Here we are searching for a specific combination of first_name and last_name. But, we might also search based on last_name only. mysql> explain select * from employees where last_name = 'Dredge' and first_name = 'Yinghua'; +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | 1 | SIMPLE | employees | NULL | ALL | NULL | NULL | NULL | NULL | 299468 | 1.00 | Using where | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) Now only 1% record out of almost 300K is the resultset. Although the query time is particularly quick as we have only 300K records, this will be a pain if the number of records are over millions. In this case, we create an index on last_name and first_name, not separately, but a composite index including both the columns. create index idx_last_first on employees(last_name, first_name) mysql> explain select * from employees where last_name = 'Dredge' and first_name = 'Yinghua'; +----+-------------+-----------+------------+------+----------------+----------------+---------+-------------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------------+------+----------+-------+ | 1 | SIMPLE | employees | NULL | ref | idx_last_first | idx_last_first | 124 | const,const | 1 | 100.00 | NULL | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------------+------+----------+-------+ 1 row in set, 1 warning (0.00 sec) We chose to put last_name before first_name while creating the index as the optimizer starts from the leftmost prefix of the index while evaluating the query. 
For example, if we have a 3-column index like idx(c1, c2, c3), then the search capability of the index follows - (c1), (c1, c2) or (c1, c2, c3) i.e. if your where clause has only first_name this index won\u2019t work. mysql> explain select * from employees where first_name = 'Yinghua'; +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | 1 | SIMPLE | employees | NULL | ALL | NULL | NULL | NULL | NULL | 299468 | 10.00 | Using where | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) But, if you have only the last_name in the where clause, it will work as expected. mysql> explain select * from employees where last_name = 'Dredge'; +----+-------------+-----------+------------+------+----------------+----------------+---------+-------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------+------+----------+-------+ | 1 | SIMPLE | employees | NULL | ref | idx_last_first | idx_last_first | 66 | const | 200 | 100.00 | NULL | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------+------+----------+-------+ 1 row in set, 1 warning (0.00 sec) For another example, use the following queries:- create table employees_2 like employees; create table salaries_2 like salaries; alter table salaries_2 drop primary key; We made copies of employees and salaries tables without the Primary Key of salaries table to understand an example of Select with Join. When you have queries like the below, it becomes tricky to identify the pain point of the query. mysql> select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'; 1860 rows in set (4.44 sec) This query is taking about 4.5 seconds to complete with 1860 rows in the resultset. Let\u2019s look at the Explain plan. There will be 2 records in the Explain plan as 2 tables are used in the query. mysql> explain select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'; +----+-------------+-------+------------+--------+------------------------+---------+---------+--------------------+---------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+--------+------------------------+---------+---------+--------------------+---------+----------+-------------+ | 1 | SIMPLE | s | NULL | ALL | NULL | NULL | NULL | NULL | 2837194 | 100.00 | NULL | | 1 | SIMPLE | e | NULL | eq_ref | PRIMARY,idx_last_first | PRIMARY | 4 | employees.s.emp_no | 1 | 5.00 | Using where | +----+-------------+-------+------------+--------+------------------------+---------+---------+--------------------+---------+----------+-------------+ 2 rows in set, 1 warning (0.00 sec) These are in order of evaluation i.e. 
salaries_2 will be evaluated first and then employees_2 will be joined to it. As it looks like, it scans almost all the rows of salaries_2 table and tries to match the employees_2 rows as per the join condition. Though where clause is used in fetching the final resultset, but the index corresponding to the where clause is not used for the employees_2 table. If the join is done on two indexes which have the same data-types, it will always be faster. So, let\u2019s create an index on the emp_no column of salaries_2 table and analyze the query again. create index idx_empno on salaries_2(emp_no); mysql> explain select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'; +----+-------------+-------+------------+------+------------------------+----------------+---------+--------------------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+------+------------------------+----------------+---------+--------------------+------+----------+-------+ | 1 | SIMPLE | e | NULL | ref | PRIMARY,idx_last_first | idx_last_first | 66 | const | 200 | 100.00 | NULL | | 1 | SIMPLE | s | NULL | ref | idx_empno | idx_empno | 4 | employees.e.emp_no | 9 | 100.00 | NULL | +----+-------------+-------+------------+------+------------------------+----------------+---------+--------------------+------+----------+-------+ 2 rows in set, 1 warning (0.00 sec) Now, not only did the index help the optimizer to examine only a few rows in both tables, it reversed the order of the tables in evaluation. The employees_2 table is evaluated first and rows are selected as per the index respective to the where clause. Then the records are joined to salaries_2 table as per the index used due to the join condition. The execution time of the query came down from 4.5s to 0.02s . mysql> select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'\\G 1860 rows in set (0.02 sec)","title":"Query Performance"},{"location":"level101/databases_sql/query_performance/#query-performance-improvement","text":"Query Performance is a very crucial aspect of relational databases. If not tuned correctly, the select queries can become slow and painful for the application, and for the MySQL server as well. The important task is to identify the slow queries and try to improve their performance by either rewriting them or creating proper indexes on the tables involved in it.","title":"Query Performance Improvement"},{"location":"level101/databases_sql/query_performance/#the-slow-query-log","text":"The slow query log contains SQL statements that take a longer time to execute then set in the config parameter long_query_time. These queries are the candidates for optimization. There are some good utilities to summarize the slow query logs like, mysqldumpslow (provided by MySQL itself), pt-query-digest (provided by Percona), etc. Following are the config parameters that are used to enable and effectively catch slow queries Variable Explanation Example value slow_query_log Enables or disables slow query logs ON slow_query_log_file The location of the slow query log /var/lib/mysql/mysql-slow.log long_query_time Threshold time. 
The query that takes longer than this time is logged in slow query log 5 log_queries_not_using_indexes When enabled with the slow query log, the queries which do not make use of any index are also logged in the slow query log even though they take less time than long_query_time. ON So, for this section, we will be enabling slow_query_log , long_query_time will be kept to 0.3 (300 ms) , and log_queries_not_using index will be enabled as well. Below are the queries that we will execute on the employees database. select * from employees where last_name = 'Koblick'; select * from salaries where salary >= 100000; select * from titles where title = 'Manager'; select * from employees where year(hire_date) = 1995; select year(e.hire_date), max(s.salary) from employees e join salaries s on e.emp_no=s.emp_no group by year(e.hire_date); Now, queries 1 , 3 and 4 executed under 300 ms but if we check the slow query logs, we will find these queries logged as they are not using any of the index. Queries 2 and 5 are taking longer than 300ms and also not using any index. Use the following command to get the summary of the slow query log mysqldumpslow /var/lib/mysql/mysql-slow.log There are some more queries in the snapshot that were along with the queries mentioned. Mysqldumpslow replaces actual values that were used by N (in case of numbers) and S (in case of strings). That can be overridden by -a option, however that will increase the output lines if different values are used in similar queries.","title":"The Slow Query Log"},{"location":"level101/databases_sql/query_performance/#the-explain-plan","text":"The EXPLAIN command is used with any query that we want to analyze. It describes the query execution plan, how MySQL sees and executes the query. EXPLAIN works with Select, Insert, Update and Delete statements. It tells about different aspects of the query like, how tables are joined, indexes used or not, etc. The important thing here is to understand the basic Explain plan output of a query to determine its performance. Let's take the following query as an example, mysql> explain select * from salaries where salary = 100000; +----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+ | 1 | SIMPLE | salaries | NULL | ALL | NULL | NULL | NULL | NULL | 2838426 | 10.00 | Using where | +----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) The key aspects to understand in the above output are:- Partitions - the number of partitions considered while executing the query. It is only valid if the table is partitioned. Possible_keys - the list of indexes that were considered during creation of the execution plan. Key - the index that will be used while executing the query. Rows - the number of rows examined during the execution. Filtered - the percentage of rows that were filtered out of the rows examined. The maximum and most optimized result will have 100 in this field. Extra - this tells some extra information on how MySQL evaluates, whether the query is using only where clause to match target rows, any index or temporary table, etc. 
So, for the above query, we can determine that there are no partitions, there are no candidate indexes to be used and so no index is used at all, over 2M rows are examined and only 10% of them are included in the result, and lastly, only a where clause is used to match the target rows.","title":"The EXPLAIN Plan"},{"location":"level101/databases_sql/query_performance/#creating-an-index","text":"Indexes are used to speed up selecting relevant rows for a given column value. Without an index, MySQL starts with the first row and goes through the entire table to find matching rows. If the table has too many rows, the operation becomes costly. With indexes, MySQL determines the position to start looking for the data without reading the full table. A primary key is also an index which is also the fastest and is stored along with the table data. Secondary indexes are stored outside of the table data and are used to further enhance the performance of SQL statements. Indexes are mostly stored as B-Trees, with some exceptions like spatial indexes use R-Trees and memory tables use hash indexes. There are 2 ways to create indexes:- While creating a table - if we know beforehand the columns that will drive the most number of where clauses in select queries, then we can put an index over them while creating a table. Altering a Table - To improve the performance of a troubling query, we create an index on a table which already has data in it using ALTER or CREATE INDEX command. This operation does not block the table but might take some time to complete depending on the size of the table. Let\u2019s look at the query that we discussed in the previous section. It\u2019s clear that scanning over 2M records is not a good idea when only 10% of those records are actually in the resultset. Hence, we create an index on the salary column of the salaries table. create index idx_salary on salaries(salary) OR alter table salaries add index idx_salary(salary) And the same explain plan now looks like this mysql> explain select * from salaries where salary = 100000; +----+-------------+----------+------------+------+---------------+------------+---------+-------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+----------+------------+------+---------------+------------+---------+-------+------+----------+-------+ | 1 | SIMPLE | salaries | NULL | ref | idx_salary | idx_salary | 4 | const | 13 | 100.00 | NULL | +----+-------------+----------+------------+------+---------------+------------+---------+-------+------+----------+-------+ 1 row in set, 1 warning (0.00 sec) Now the index used is idx_salary, the one we recently created. The index actually helped examine only 13 records and all of them are in the resultset. Also, the query execution time is also reduced from over 700ms to almost negligible. Let\u2019s look at another example. Here we are searching for a specific combination of first_name and last_name. But, we might also search based on last_name only. 
mysql> explain select * from employees where last_name = 'Dredge' and first_name = 'Yinghua'; +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | 1 | SIMPLE | employees | NULL | ALL | NULL | NULL | NULL | NULL | 299468 | 1.00 | Using where | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) Now only 1% record out of almost 300K is the resultset. Although the query time is particularly quick as we have only 300K records, this will be a pain if the number of records are over millions. In this case, we create an index on last_name and first_name, not separately, but a composite index including both the columns. create index idx_last_first on employees(last_name, first_name) mysql> explain select * from employees where last_name = 'Dredge' and first_name = 'Yinghua'; +----+-------------+-----------+------------+------+----------------+----------------+---------+-------------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------------+------+----------+-------+ | 1 | SIMPLE | employees | NULL | ref | idx_last_first | idx_last_first | 124 | const,const | 1 | 100.00 | NULL | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------------+------+----------+-------+ 1 row in set, 1 warning (0.00 sec) We chose to put last_name before first_name while creating the index as the optimizer starts from the leftmost prefix of the index while evaluating the query. For example, if we have a 3-column index like idx(c1, c2, c3), then the search capability of the index follows - (c1), (c1, c2) or (c1, c2, c3) i.e. if your where clause has only first_name this index won\u2019t work. mysql> explain select * from employees where first_name = 'Yinghua'; +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ | 1 | SIMPLE | employees | NULL | ALL | NULL | NULL | NULL | NULL | 299468 | 10.00 | Using where | +----+-------------+-----------+------------+------+---------------+------+---------+------+--------+----------+-------------+ 1 row in set, 1 warning (0.00 sec) But, if you have only the last_name in the where clause, it will work as expected. 
mysql> explain select * from employees where last_name = 'Dredge'; +----+-------------+-----------+------------+------+----------------+----------------+---------+-------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------+------+----------+-------+ | 1 | SIMPLE | employees | NULL | ref | idx_last_first | idx_last_first | 66 | const | 200 | 100.00 | NULL | +----+-------------+-----------+------------+------+----------------+----------------+---------+-------+------+----------+-------+ 1 row in set, 1 warning (0.00 sec) For another example, use the following queries:- create table employees_2 like employees; create table salaries_2 like salaries; alter table salaries_2 drop primary key; We made copies of employees and salaries tables without the Primary Key of salaries table to understand an example of Select with Join. When you have queries like the below, it becomes tricky to identify the pain point of the query. mysql> select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'; 1860 rows in set (4.44 sec) This query is taking about 4.5 seconds to complete with 1860 rows in the resultset. Let\u2019s look at the Explain plan. There will be 2 records in the Explain plan as 2 tables are used in the query. mysql> explain select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'; +----+-------------+-------+------------+--------+------------------------+---------+---------+--------------------+---------+----------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+--------+------------------------+---------+---------+--------------------+---------+----------+-------------+ | 1 | SIMPLE | s | NULL | ALL | NULL | NULL | NULL | NULL | 2837194 | 100.00 | NULL | | 1 | SIMPLE | e | NULL | eq_ref | PRIMARY,idx_last_first | PRIMARY | 4 | employees.s.emp_no | 1 | 5.00 | Using where | +----+-------------+-------+------------+--------+------------------------+---------+---------+--------------------+---------+----------+-------------+ 2 rows in set, 1 warning (0.00 sec) These are in order of evaluation i.e. salaries_2 will be evaluated first and then employees_2 will be joined to it. As it looks like, it scans almost all the rows of salaries_2 table and tries to match the employees_2 rows as per the join condition. Though where clause is used in fetching the final resultset, but the index corresponding to the where clause is not used for the employees_2 table. If the join is done on two indexes which have the same data-types, it will always be faster. So, let\u2019s create an index on the emp_no column of salaries_2 table and analyze the query again. 
create index idx_empno on salaries_2(emp_no); mysql> explain select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'; +----+-------------+-------+------------+------+------------------------+----------------+---------+--------------------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+------+------------------------+----------------+---------+--------------------+------+----------+-------+ | 1 | SIMPLE | e | NULL | ref | PRIMARY,idx_last_first | idx_last_first | 66 | const | 200 | 100.00 | NULL | | 1 | SIMPLE | s | NULL | ref | idx_empno | idx_empno | 4 | employees.e.emp_no | 9 | 100.00 | NULL | +----+-------------+-------+------------+------+------------------------+----------------+---------+--------------------+------+----------+-------+ 2 rows in set, 1 warning (0.00 sec) Now, not only did the index help the optimizer to examine only a few rows in both tables, it reversed the order of the tables in evaluation. The employees_2 table is evaluated first and rows are selected as per the index respective to the where clause. Then the records are joined to salaries_2 table as per the index used due to the join condition. The execution time of the query came down from 4.5s to 0.02s . mysql> select e.first_name, e.last_name, s.salary, e.hire_date from employees_2 e join salaries_2 s on e.emp_no=s.emp_no where e.last_name='Dredge'\\G 1860 rows in set (0.02 sec)","title":"Creating an Index"},{"location":"level101/databases_sql/replication/","text":"MySQL Replication Replication enables data from one MySQL host (termed as Primary) to be copied to another MySQL host (termed as Replica). MySQL Replication is asynchronous in nature by default, but it can be changed to semi-synchronous with some configurations. Some common applications of MySQL replication are:- Read-scaling - as multiple hosts can replicate the data from a single primary host, we can set up as many replicas as we need and scale reads through them, i.e. application writes will go to a single primary host and the reads can balance between all the replicas that are there. Such a setup can improve the write performance as well, as the primary is dedicated to only updates and not reads. Backups using replicas - the backup process can sometimes be a little heavy. But if we have replicas configured, then we can use one of them to get the backup without affecting the primary data at all. Disaster Recovery - a replica in some other geographical region paves a proper path to configure disaster recovery. MySQL supports different types of synchronizations as well:- Asynchronous - this is the default synchronization method. It is one-way, i.e. one host serves as primary and one or more hosts as replica. We will discuss this method throughout the replication topic. Semi-Synchronous - in this type of synchronization, a commit performed on the primary host is blocked until at least one replica acknowledges it. Post the acknowledgement from any one replica, the control is returned to the session that performed the transaction. This ensures strong consistency but the replication is slower than asynchronous. Delayed - we can deliberately lag the replica in a typical MySQL replication by the number of seconds desired by the use case. 
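As a rough sketch of the delayed option, the delay can be set on the replica (MySQL 8.0.23+ syntax shown; the 1800 seconds here is just an example value):

```sql
-- Run on the replica: apply changes 30 minutes after the primary (example value).
STOP REPLICA;
CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 1800;
START REPLICA;
```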
This type of replication safeguards from severe human errors of dropping or corrupting the data on the primary, for example, in the above diagram for Delayed Replication, if a DROP DATABASE is executed by mistake on the primary, we still have 30 minutes to recover the data from R2 as that command has not been replicated on R2 yet. Pre-Requisites Before we dive into setting up replication, we should know about the binary logs. Binary logs play a very important role in MySQL replication. Binary logs, or commonly known as binlogs contain events about the changes done to the database, like table structure changes, data changes via DML operations, etc. They are not used to log SELECT statements. For replication, the primary sends the information to the replicas using its binlogs about the changes done to the database, and the replicas make the same data changes. With respect to MySQL replication, the binary log format can be of two types that decides the main type of replication:- - Statement-Based Replication or SBR - Row-Based Replication or RBR Statement Based Binlog Format Originally, the replication in MySQL was based on SQL statements getting replicated and executed on the replica from the primary. This is called statement based logging. The binlog contains the exact SQL statement run by the session. So If we run the above statements to insert 3 records and the update 3 in a single update statement, they will be logged exactly the same as when we executed them. Row Based Binlog Format The Row based is the default one in the latest MySQL releases. This is a lot different from the Statement format as here, row events are logged instead of statements. By that we mean, in the above example one update statement affected 3 records, but binlog had only one update statement; if it is a row based format, binlog will have an event for each record updated. Statement Based v/s Row Based binlogs Let\u2019s have a look at the operational differences between statement-based and row-based binlogs. Statement Based Row Based Logs SQL statements as executed Logs row events based on SQL statements executed Takes lesser disk space Takes more disk space Restoring using binlogs is faster Restoring using binlogs is slower When used for replication, if any statement has a predefined function that has its own value, like sysdate(), uuid() etc, the output could be different on the replica which makes it inconsistent. Whatever is executed becomes a row event with values, so there will be no problem if such functions are used in SQL statements. Only statements are logged so no other row events are generated. A lot of events are generated when a table is copied into another using INSERT INTO SELECT. Note - There is another type of binlog format called Mixed . With mixed logging, statement based is used by default but it switches to row based in certain cases. If MySQL cannot guarantee that statement based logging is safe for the statements executed, it issues a warning and switches to row based for those statements. We will be using binary log format as Row for the entire replication topic. Replication in Motion The above figure indicates how a typical MySQL replication works. Replica_IO_Thread is responsible to fetch the binlog events from the primary binary logs to the replica On the Replica host, relay logs are created which are exact copies of the binary logs. If the binary logs on primary are in row format, the relay logs will be the same. Replica_SQL_Thread applies the relay logs on the replica MySQL server. 
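The binary logs and their format discussed above can be inspected from a mysql client on the primary; a small sketch (the binlog file name is just an example):

```sql
-- On the primary: confirm the logging format and list the binary logs.
SHOW VARIABLES LIKE 'binlog_format';
SHOW BINARY LOGS;
-- Peek at the first few events of one binlog (file name is an example).
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 5;
```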
If log-bin is enabled on the replica, then the replica will have its own binary logs as well. If log-slave-updates is enabled, then it will have the updates from the primary logged in the binlogs as well. Setting up Replication In this section, we will set up a simple asynchronous replication. The binlogs will be in row based format. The replication will be set up on two fresh hosts with no prior data present. There are two different ways in which we can set up replication. Binlog based - Each replica keeps a record of the binlog coordinates on the primary - current binlog and position in the binlog till where it has read and processed. So, at a time different replicas might be reading different parts of the same binlog. GTID based - Every transaction gets an identifier called global transaction identifier or GTID. There is no need to keep the record of binlog coordinates, as long as the replica has all the GTIDs executed on the primary, it is consistent with the primary. A typical GTID is the server_uuid:# positive integer. We will set up a GTID based replication in the following section but will also discuss binlog based replication setup as well. Primary Host Configurations The following config parameters should be present in the primary my.cnf file for setting up GTID based replication. server-id - a unique ID for the mysql server log-bin - the binlog location binlog-format - ROW | STATEMENT (we will use ROW) gtid-mode - ON enforce-gtid-consistency - ON (allows execution of only those statements which can be logged using GTIDs) Replica Host Configurations The following config parameters should be present in the replica my.cnf file for setting up replication. server-id - different than the primary host log-bin - (optional, if you want replica to log its own changes as well) binlog-format - depends on the above gtid-mode - ON enforce-gtid-consistency - ON log-slave-updates - ON (if binlog is enabled, then we can enable this. This enables the replica to log the changes coming from the primary along with its own changes. Helps in setting up chain replication) Replication User Every replica connects to the primary using a mysql user for replicating. So there must be a mysql user account for the same on the primary host. Any user can be used for this purpose provided it has REPLICATION SLAVE privilege. If the sole purpose is replication then we can have a user with only the required privilege. On the primary host mysql> create user repl_user@ identified by 'xxxxx'; mysql> grant replication slave on *.* to repl_user@''; Obtaining Starting position from Primary Run the following command on the primary host mysql> show master status\\G *************************** 1. row *************************** File: mysql-bin.000001 Position: 73 Binlog_Do_DB: Binlog_Ignore_DB: Executed_Gtid_Set: e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-3 1 row in set (0.00 sec) If we are working with binary log based replication, the top two output lines are the most important ones. That tells the current binlog on the primary host and till what position it has written. For fresh hosts we know that no data is written so we can directly set up replication using the very first binlog file and position 4. If we are setting up a replication from a backup, then that changes the way we obtain the starting position. For GTIDs, the executed_gtid_set is the value where primary is right now. 
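As an aside, a simple way to verify that the configuration parameters listed above have actually taken effect on a host (the variable names map directly to the my.cnf settings) is to query them directly: select @@server_id, @@log_bin, @@binlog_format, @@gtid_mode, @@enforce_gtid_consistency;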
Again, for a fresh setup, we don\u2019t have to specify anything about the starting point and it will start from the transaction id 1, but when we set up from a backup, the backup will contain the GTID positions till where backup has been taken. Setting up Replica The replication setup must know about the primary host, the user and password to connect, the binlog coordinates (for binlog based replication) or the GTID auto-position parameter. The following command is used for setting up change master to master_host = '', master_port = , master_user = 'repl_user', master_password = 'xxxxx', master_auto_position = 1; Note - the Change Master To command has been replaced with Change Replication Source To from Mysql 8.0.23 onwards, also all the master and slave keywords are replaced with source and replica . If it is binlog based replication, then instead of master_auto_position, we need to specify the binlog coordinates. master_log_file = 'mysql-bin.000001', master_log_pos = 4 Starting Replication and Check Status Now that everything is configured, we just need to start the replication on the replica via the following command start slave; OR from MySQL 8.0.23 onwards, start replica; Whether or not the replication is running successfully, we can determine by running the following command show slave status\\G OR from MySQL 8.0.23 onwards, show replica status\\G mysql> show replica status\\G *************************** 1. row *************************** Replica_IO_State: Waiting for master to send event Source_Host: Source_User: repl_user Source_Port: Connect_Retry: 60 Source_Log_File: mysql-bin.000001 Read_Source_Log_Pos: 852 Relay_Log_File: mysql-relay-bin.000002 Relay_Log_Pos: 1067 Relay_Source_Log_File: mysql-bin.000001 Replica_IO_Running: Yes Replica_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Source_Log_Pos: 852 Relay_Log_Space: 1283 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Source_SSL_Allowed: No Source_SSL_CA_File: Source_SSL_CA_Path: Source_SSL_Cert: Source_SSL_Cipher: Source_SSL_Key: Seconds_Behind_Source: 0 Source_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Source_Server_Id: 1 Source_UUID: e17d0920-d00e-11eb-a3e6-000d3aa00f87 Source_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Replica_SQL_Running_State: Slave has read all relay log; waiting for more updates Source_Retry_Count: 86400 Source_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Source_SSL_Crl: Source_SSL_Crlpath: Retrieved_Gtid_Set: e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-3 Executed_Gtid_Set: e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-3 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: Source_TLS_Version: Source_public_key_path: Get_Source_public_key: 0 Network_Namespace: 1 row in set (0.00 sec) Some of the parameters are explained below:- Relay_Source_Log_File - the primary\u2019s file where replica is currently reading from Execute_Source_Log_Pos - for the above file on which position is the replica reading currently from. These two parameters are of utmost importance when binlog based replication is used. Replica_IO_Running - IO thread of replica is running or not Replica_SQL_Running - SQL thread of replica is running or not Seconds_Behind_Source - the difference of seconds when a statement was executed on Primary and then on Replica. 
This indicates how much replication lag is there. Source_UUID - the uuid of the primary host Retrieved_Gtid_Set - the GTIDs fetched from the primary host by the replica to be executed. Executed_Gtid_Set - the GTIDs executed on the replica. This set remains the same for the entire cluster if the replicas are in sync. Auto_Position - it directs the replica to fetch the next GTID automatically Create a Replica for the already setup cluster The steps discussed in the previous section talk about setting up replication on two fresh hosts. When we have to set up a replica for a host which is already serving applications, then the backup of the primary is used, either a fresh backup taken for the replica (should only be done if the traffic it is serving is less) or a recently taken backup. If the size of the databases on the MySQL primary server is small, less than 100G recommended, then mysqldump can be used to take the backup along with the following options. mysqldump -uroot -p -hhost_ip -P3306 --all-databases --single-transaction --master-data=1 > primary_host.bkp --single-transaction - this option starts a transaction before taking the backup which ensures it is consistent. As transactions are isolated from each other, no other writes affect the backup. --master-data - this option is required if binlog based replication is desired to be set up. It includes the binary log file and log file position in the backup file. When GTID mode is enabled and mysqldump is executed, it includes the GTIDs executed, to be used to start the replica after the backup position. The contents of the mysqldump output file will have the following It is recommended to comment these before restoring otherwise they could throw errors. Also, using master-data=2 will automatically comment the master_log_file line. Similarly, when taking a backup of the host using xtrabackup , the xtrabackup_info file contains the information about the binlog file and file position, as well as the GTID executed set. server_version = 8.0.25 start_time = 2021-06-22 03:45:17 end_time = 2021-06-22 03:45:20 lock_time = 0 binlog_pos = filename 'mysql-bin.000007', position '196', GTID of the last change 'e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-5' innodb_from_lsn = 0 innodb_to_lsn = 18153149 partial = N incremental = N format = file compressed = N encrypted = N Now, after setting up the MySQL server on the desired host, restore the backup taken by any one of the above methods. If the intended way is binlog based replication, then use the binlog file and position info in the following command change replication source to source_host = 'primary_ip', source_port = 3306, source_user = 'repl_user', source_password = 'xxxxx', source_log_file = 'mysql-bin.000007', source_log_pos = '196'; If the replication needs to be set up via GTIDs, then run the below commands to tell the replica about the GTIDs already executed. On the Replica host, run the following commands reset master; set global gtid_purged = 'e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-5' change replication source to source_host = 'primary_ip', source_port = 3306, source_user = 'repl_user', source_password = 'xxxxx', source_auto_position = 1 The reset master command resets the position of the binary log to the initial one. It can be skipped if the host is a freshly installed MySQL, but since we restored a backup here, it is necessary. 
The gtid_purged global variable lets the replica know the GTIDs that have already been executed, so that the replication can start after that. Then in the change source command, we set the auto-position to 1 which automatically gets the next GTID to proceed. Further Reading More applications of Replication Automtaed Failovers using MySQL Orchestrator","title":"MySQL Replication"},{"location":"level101/databases_sql/replication/#mysql-replication","text":"Replication enables data from one MySQL host (termed as Primary) to be copied to another MySQL host (termed as Replica). MySQL Replication is asynchronous in nature by default, but it can be changed to semi-synchronous with some configurations. Some common applications of MySQL replication are:- Read-scaling - as multiple hosts can replicate the data from a single primary host, we can set up as many replicas as we need and scale reads through them, i.e. application writes will go to a single primary host and the reads can balance between all the replicas that are there. Such a setup can improve the write performance as well, as the primary is dedicated to only updates and not reads. Backups using replicas - the backup process can sometimes be a little heavy. But if we have replicas configured, then we can use one of them to get the backup without affecting the primary data at all. Disaster Recovery - a replica in some other geographical region paves a proper path to configure disaster recovery. MySQL supports different types of synchronizations as well:- Asynchronous - this is the default synchronization method. It is one-way, i.e. one host serves as primary and one or more hosts as replica. We will discuss this method throughout the replication topic. Semi-Synchronous - in this type of synchronization, a commit performed on the primary host is blocked until at least one replica acknowledges it. Post the acknowledgement from any one replica, the control is returned to the session that performed the transaction. This ensures strong consistency but the replication is slower than asynchronous. Delayed - we can deliberately lag the replica in a typical MySQL replication by the number of seconds desired by the use case. This type of replication safeguards from severe human errors of dropping or corrupting the data on the primary, for example, in the above diagram for Delayed Replication, if a DROP DATABASE is executed by mistake on the primary, we still have 30 minutes to recover the data from R2 as that command has not been replicated on R2 yet. Pre-Requisites Before we dive into setting up replication, we should know about the binary logs. Binary logs play a very important role in MySQL replication. Binary logs, or commonly known as binlogs contain events about the changes done to the database, like table structure changes, data changes via DML operations, etc. They are not used to log SELECT statements. For replication, the primary sends the information to the replicas using its binlogs about the changes done to the database, and the replicas make the same data changes. With respect to MySQL replication, the binary log format can be of two types that decides the main type of replication:- - Statement-Based Replication or SBR - Row-Based Replication or RBR Statement Based Binlog Format Originally, the replication in MySQL was based on SQL statements getting replicated and executed on the replica from the primary. This is called statement based logging. The binlog contains the exact SQL statement run by the session. 
So If we run the above statements to insert 3 records and the update 3 in a single update statement, they will be logged exactly the same as when we executed them. Row Based Binlog Format The Row based is the default one in the latest MySQL releases. This is a lot different from the Statement format as here, row events are logged instead of statements. By that we mean, in the above example one update statement affected 3 records, but binlog had only one update statement; if it is a row based format, binlog will have an event for each record updated. Statement Based v/s Row Based binlogs Let\u2019s have a look at the operational differences between statement-based and row-based binlogs. Statement Based Row Based Logs SQL statements as executed Logs row events based on SQL statements executed Takes lesser disk space Takes more disk space Restoring using binlogs is faster Restoring using binlogs is slower When used for replication, if any statement has a predefined function that has its own value, like sysdate(), uuid() etc, the output could be different on the replica which makes it inconsistent. Whatever is executed becomes a row event with values, so there will be no problem if such functions are used in SQL statements. Only statements are logged so no other row events are generated. A lot of events are generated when a table is copied into another using INSERT INTO SELECT. Note - There is another type of binlog format called Mixed . With mixed logging, statement based is used by default but it switches to row based in certain cases. If MySQL cannot guarantee that statement based logging is safe for the statements executed, it issues a warning and switches to row based for those statements. We will be using binary log format as Row for the entire replication topic. Replication in Motion The above figure indicates how a typical MySQL replication works. Replica_IO_Thread is responsible to fetch the binlog events from the primary binary logs to the replica On the Replica host, relay logs are created which are exact copies of the binary logs. If the binary logs on primary are in row format, the relay logs will be the same. Replica_SQL_Thread applies the relay logs on the replica MySQL server. If log-bin is enabled on the replica, then the replica will have its own binary logs as well. If log-slave-updates is enabled, then it will have the updates from the primary logged in the binlogs as well.","title":"MySQL Replication"},{"location":"level101/databases_sql/replication/#setting-up-replication","text":"In this section, we will set up a simple asynchronous replication. The binlogs will be in row based format. The replication will be set up on two fresh hosts with no prior data present. There are two different ways in which we can set up replication. Binlog based - Each replica keeps a record of the binlog coordinates on the primary - current binlog and position in the binlog till where it has read and processed. So, at a time different replicas might be reading different parts of the same binlog. GTID based - Every transaction gets an identifier called global transaction identifier or GTID. There is no need to keep the record of binlog coordinates, as long as the replica has all the GTIDs executed on the primary, it is consistent with the primary. A typical GTID is the server_uuid:# positive integer. We will set up a GTID based replication in the following section but will also discuss binlog based replication setup as well. 
Primary Host Configurations The following config parameters should be present in the primary my.cnf file for setting up GTID based replication. server-id - a unique ID for the mysql server log-bin - the binlog location binlog-format - ROW | STATEMENT (we will use ROW) gtid-mode - ON enforce-gtid-consistency - ON (allows execution of only those statements which can be logged using GTIDs) Replica Host Configurations The following config parameters should be present in the replica my.cnf file for setting up replication. server-id - different than the primary host log-bin - (optional, if you want replica to log its own changes as well) binlog-format - depends on the above gtid-mode - ON enforce-gtid-consistency - ON log-slave-updates - ON (if binlog is enabled, then we can enable this. This enables the replica to log the changes coming from the primary along with its own changes. Helps in setting up chain replication) Replication User Every replica connects to the primary using a mysql user for replicating. So there must be a mysql user account for the same on the primary host. Any user can be used for this purpose provided it has REPLICATION SLAVE privilege. If the sole purpose is replication then we can have a user with only the required privilege. On the primary host mysql> create user repl_user@ identified by 'xxxxx'; mysql> grant replication slave on *.* to repl_user@''; Obtaining Starting position from Primary Run the following command on the primary host mysql> show master status\\G *************************** 1. row *************************** File: mysql-bin.000001 Position: 73 Binlog_Do_DB: Binlog_Ignore_DB: Executed_Gtid_Set: e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-3 1 row in set (0.00 sec) If we are working with binary log based replication, the top two output lines are the most important ones. That tells the current binlog on the primary host and till what position it has written. For fresh hosts we know that no data is written so we can directly set up replication using the very first binlog file and position 4. If we are setting up a replication from a backup, then that changes the way we obtain the starting position. For GTIDs, the executed_gtid_set is the value where primary is right now. Again, for a fresh setup, we don\u2019t have to specify anything about the starting point and it will start from the transaction id 1, but when we set up from a backup, the backup will contain the GTID positions till where backup has been taken. Setting up Replica The replication setup must know about the primary host, the user and password to connect, the binlog coordinates (for binlog based replication) or the GTID auto-position parameter. The following command is used for setting up change master to master_host = '', master_port = , master_user = 'repl_user', master_password = 'xxxxx', master_auto_position = 1; Note - the Change Master To command has been replaced with Change Replication Source To from Mysql 8.0.23 onwards, also all the master and slave keywords are replaced with source and replica . If it is binlog based replication, then instead of master_auto_position, we need to specify the binlog coordinates. 
master_log_file = 'mysql-bin.000001', master_log_pos = 4 Starting Replication and Check Status Now that everything is configured, we just need to start the replication on the replica via the following command start slave; OR from MySQL 8.0.23 onwards, start replica; Whether or not the replication is running successfully, we can determine by running the following command show slave status\\G OR from MySQL 8.0.23 onwards, show replica status\\G mysql> show replica status\\G *************************** 1. row *************************** Replica_IO_State: Waiting for master to send event Source_Host: Source_User: repl_user Source_Port: Connect_Retry: 60 Source_Log_File: mysql-bin.000001 Read_Source_Log_Pos: 852 Relay_Log_File: mysql-relay-bin.000002 Relay_Log_Pos: 1067 Relay_Source_Log_File: mysql-bin.000001 Replica_IO_Running: Yes Replica_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Source_Log_Pos: 852 Relay_Log_Space: 1283 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Source_SSL_Allowed: No Source_SSL_CA_File: Source_SSL_CA_Path: Source_SSL_Cert: Source_SSL_Cipher: Source_SSL_Key: Seconds_Behind_Source: 0 Source_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Source_Server_Id: 1 Source_UUID: e17d0920-d00e-11eb-a3e6-000d3aa00f87 Source_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Replica_SQL_Running_State: Slave has read all relay log; waiting for more updates Source_Retry_Count: 86400 Source_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Source_SSL_Crl: Source_SSL_Crlpath: Retrieved_Gtid_Set: e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-3 Executed_Gtid_Set: e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-3 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: Source_TLS_Version: Source_public_key_path: Get_Source_public_key: 0 Network_Namespace: 1 row in set (0.00 sec) Some of the parameters are explained below:- Relay_Source_Log_File - the primary\u2019s file where replica is currently reading from Execute_Source_Log_Pos - for the above file on which position is the replica reading currently from. These two parameters are of utmost importance when binlog based replication is used. Replica_IO_Running - IO thread of replica is running or not Replica_SQL_Running - SQL thread of replica is running or not Seconds_Behind_Source - the difference of seconds when a statement was executed on Primary and then on Replica. This indicates how much replication lag is there. Source_UUID - the uuid of the primary host Retrieved_Gtid_Set - the GTIDs fetched from the primary host by the replica to be executed. Executed_Gtid_Set - the GTIDs executed on the replica. This set remains the same for the entire cluster if the replicas are in sync. Auto_Position - it directs the replica to fetch the next GTID automatically Create a Replica for the already setup cluster The steps discussed in the previous section talks about the setting up replication on two fresh hosts. When we have to set up a replica for a host which is already serving applications, then the backup of the primary is used, either fresh backup taken for the replica (should only be done if the traffic it is serving is less) or use a recently taken backup. 
If the size of the databases on the MySQL primary server is small, less than 100G recommended, then mysqldump can be used to take the backup along with the following options. mysqldump -uroot -p -hhost_ip -P3306 --all-databases --single-transaction --master-data=1 > primary_host.bkp --single-transaction - this option starts a transaction before taking the backup which ensures it is consistent. As transactions are isolated from each other, no other writes affect the backup. --master-data - this option is required if binlog based replication is desired to be set up. It includes the binary log file and log file position in the backup file. When GTID mode is enabled and mysqldump is executed, it includes the GTIDs executed, to be used to start the replica after the backup position. The contents of the mysqldump output file will have the following It is recommended to comment these before restoring otherwise they could throw errors. Also, using master-data=2 will automatically comment the master_log_file line. Similarly, when taking a backup of the host using xtrabackup , the xtrabackup_info file contains the information about the binlog file and file position, as well as the GTID executed set. server_version = 8.0.25 start_time = 2021-06-22 03:45:17 end_time = 2021-06-22 03:45:20 lock_time = 0 binlog_pos = filename 'mysql-bin.000007', position '196', GTID of the last change 'e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-5' innodb_from_lsn = 0 innodb_to_lsn = 18153149 partial = N incremental = N format = file compressed = N encrypted = N Now, after setting up the MySQL server on the desired host, restore the backup taken by any one of the above methods. If the intended way is binlog based replication, then use the binlog file and position info in the following command change replication source to source_host = 'primary_ip', source_port = 3306, source_user = 'repl_user', source_password = 'xxxxx', source_log_file = 'mysql-bin.000007', source_log_pos = '196'; If the replication needs to be set up via GTIDs, then run the below commands to tell the replica about the GTIDs already executed. On the Replica host, run the following commands reset master; set global gtid_purged = 'e17d0920-d00e-11eb-a3e6-000d3aa00f87:1-5' change replication source to source_host = 'primary_ip', source_port = 3306, source_user = 'repl_user', source_password = 'xxxxx', source_auto_position = 1 The reset master command resets the position of the binary log to the initial one. It can be skipped if the host is a freshly installed MySQL, but since we restored a backup here, it is necessary. The gtid_purged global variable lets the replica know the GTIDs that have already been executed, so that the replication can start after that. Then in the change source command, we set the auto-position to 1 which automatically gets the next GTID to proceed.","title":"Setting up Replication"},{"location":"level101/databases_sql/replication/#further-reading","text":"More applications of Replication Automated Failovers using MySQL Orchestrator","title":"Further Reading"},{"location":"level101/databases_sql/select_query/","text":"SELECT Query The most commonly used command while working with MySQL is SELECT. It is used to fetch the result set from one or more tables. 
The general form of a typical select query looks like:- SELECT expr FROM table1 [WHERE condition] [GROUP BY column_list HAVING condition] [ORDER BY column_list ASC|DESC] [LIMIT #] The above general form contains some commonly used clauses of a SELECT query:- expr - comma-separated column list or * (for all columns) WHERE - a condition is provided, if true, directs the query to select only those records. GROUP BY - groups the entire result set based on the column list provided. An aggregate function is recommended to be present in the select expression of the query. HAVING supports grouping by putting a condition on the selected or any other aggregate function. ORDER BY - sorts the result set based on the column list in ascending or descending order. LIMIT - commonly used to limit the number of records. Let\u2019s have a look at some examples for a better understanding of the above. The dataset used for the examples below is available here and is free to use. Select all records mysql> select * from employees limit 5; +--------+------------+------------+-----------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+-----------+--------+------------+ | 10001 | 1953-09-02 | Georgi | Facello | M | 1986-06-26 | | 10002 | 1964-06-02 | Bezalel | Simmel | F | 1985-11-21 | | 10003 | 1959-12-03 | Parto | Bamford | M | 1986-08-28 | | 10004 | 1954-05-01 | Chirstian | Koblick | M | 1986-12-01 | | 10005 | 1955-01-21 | Kyoichi | Maliniak | M | 1989-09-12 | +--------+------------+------------+-----------+--------+------------+ 5 rows in set (0.00 sec) Select specific fields for all records mysql> select first_name, last_name, gender from employees limit 5; +------------+-----------+--------+ | first_name | last_name | gender | +------------+-----------+--------+ | Georgi | Facello | M | | Bezalel | Simmel | F | | Parto | Bamford | M | | Chirstian | Koblick | M | | Kyoichi | Maliniak | M | +------------+-----------+--------+ 5 rows in set (0.00 sec) Select all records Where hire_date >= January 1, 1990 mysql> select * from employees where hire_date >= '1990-01-01' limit 5; +--------+------------+------------+-------------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+-------------+--------+------------+ | 10008 | 1958-02-19 | Saniya | Kalloufi | M | 1994-09-15 | | 10011 | 1953-11-07 | Mary | Sluis | F | 1990-01-22 | | 10012 | 1960-10-04 | Patricio | Bridgland | M | 1992-12-18 | | 10016 | 1961-05-02 | Kazuhito | Cappelletti | M | 1995-01-27 | | 10017 | 1958-07-06 | Cristinel | Bouloucos | F | 1993-08-03 | +--------+------------+------------+-------------+--------+------------+ 5 rows in set (0.01 sec) Select first_name and last_name from all records Where birth_date >= 1960 AND gender = \u2018F\u2019 mysql> select first_name, last_name from employees where year(birth_date) >= 1960 and gender='F' limit 5; +------------+-----------+ | first_name | last_name | +------------+-----------+ | Bezalel | Simmel | | Duangkaew | Piveteau | | Divier | Reistad | | Jeong | Reistad | | Mingsen | Casley | +------------+-----------+ 5 rows in set (0.00 sec) Display the total number of records mysql> select count(*) from employees; +----------+ | count(*) | +----------+ | 300024 | +----------+ 1 row in set (0.05 sec) Display gender-wise count of all records mysql> select gender, count(*) from employees group by gender; +--------+----------+ | gender | count(*) | 
+--------+----------+ | M | 179973 | | F | 120051 | +--------+----------+ 2 rows in set (0.14 sec) Display the year of hire_date and number of employees hired that year, also only those years where more than 20k employees were hired mysql> select year(hire_date), count(*) from employees group by year(hire_date) having count(*) > 20000; +-----------------+----------+ | year(hire_date) | count(*) | +-----------------+----------+ | 1985 | 35316 | | 1986 | 36150 | | 1987 | 33501 | | 1988 | 31436 | | 1989 | 28394 | | 1990 | 25610 | | 1991 | 22568 | | 1992 | 20402 | +-----------------+----------+ 8 rows in set (0.14 sec) Display all records ordered by their hire_date in descending order. If hire_date is the same then in order of their birth_date ascending order mysql> select * from employees order by hire_date desc, birth_date asc limit 5; +--------+------------+------------+-----------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+-----------+--------+------------+ | 463807 | 1964-06-12 | Bikash | Covnot | M | 2000-01-28 | | 428377 | 1957-05-09 | Yucai | Gerlach | M | 2000-01-23 | | 499553 | 1954-05-06 | Hideyuki | Delgrande | F | 2000-01-22 | | 222965 | 1959-08-07 | Volkmar | Perko | F | 2000-01-13 | | 47291 | 1960-09-09 | Ulf | Flexer | M | 2000-01-12 | +--------+------------+------------+-----------+--------+------------+ 5 rows in set (0.12 sec) SELECT - JOINS JOIN statement is used to produce a combined result set from two or more tables based on certain conditions. It can be also used with Update and Delete statements but we will be focussing on the select query. Following is a basic general form for joins SELECT table1.col1, table2.col1, ... (any combination) FROM table1 table2 ON (or USING depends on join_type) table1.column_for_joining = table2.column_for_joining WHERE \u2026 Any number of columns can be selected, but it is recommended to select only those which are relevant to increase the readability of the resultset. All other clauses like where, group by are not mandatory. Let\u2019s discuss the types of JOINs supported by MySQL Syntax. Inner Join This joins table A with table B on a condition. Only the records where the condition is True are selected in the resultset. 
Display some details of employees along with their salary mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e join salaries s on e.emp_no=s.emp_no limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) Similar result can be achieved by mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e join salaries s using (emp_no) limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) And also by mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e natural join salaries s limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) Outer Join Majorly of two types:- - LEFT - joining complete table A with table B on a condition. All the records from table A are selected, but from table B, only those records are selected where the condition is True. - RIGHT - Exact opposite of the left join. Let us assume the below tables for understanding left join better. mysql> select * from dummy1; +----------+------------+ | same_col | diff_col_1 | +----------+------------+ | 1 | A | | 2 | B | | 3 | C | +----------+------------+ mysql> select * from dummy2; +----------+------------+ | same_col | diff_col_2 | +----------+------------+ | 1 | X | | 3 | Y | +----------+------------+ A simple select join will look like the one below. mysql> select * from dummy1 d1 left join dummy2 d2 on d1.same_col=d2.same_col; +----------+------------+----------+------------+ | same_col | diff_col_1 | same_col | diff_col_2 | +----------+------------+----------+------------+ | 1 | A | 1 | X | | 3 | C | 3 | Y | | 2 | B | NULL | NULL | +----------+------------+----------+------------+ 3 rows in set (0.00 sec) Which can also be written as mysql> select * from dummy1 d1 left join dummy2 d2 using(same_col); +----------+------------+------------+ | same_col | diff_col_1 | diff_col_2 | +----------+------------+------------+ | 1 | A | X | | 3 | C | Y | | 2 | B | NULL | +----------+------------+------------+ 3 rows in set (0.00 sec) And also as mysql> select * from dummy1 d1 natural left join dummy2 d2; +----------+------------+------------+ | same_col | diff_col_1 | diff_col_2 | +----------+------------+------------+ | 1 | A | X | | 3 | C | Y | | 2 | B | NULL | +----------+------------+------------+ 3 rows in set (0.00 sec) Cross Join This does a cross product of table A and table B without any condition. It doesn\u2019t have a lot of applications in the real world. 
A Simple Cross Join looks like this mysql> select * from dummy1 cross join dummy2; +----------+------------+----------+------------+ | same_col | diff_col_1 | same_col | diff_col_2 | +----------+------------+----------+------------+ | 1 | A | 3 | Y | | 1 | A | 1 | X | | 2 | B | 3 | Y | | 2 | B | 1 | X | | 3 | C | 3 | Y | | 3 | C | 1 | X | +----------+------------+----------+------------+ 6 rows in set (0.01 sec) One use case that can come in handy is when you have to fill in some missing entries. For example, all the entries from dummy1 must be inserted into a similar table dummy3, with each record must have 3 entries with statuses 1, 5 and 7. mysql> desc dummy3; +----------+----------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +----------+----------+------+-----+---------+-------+ | same_col | int | YES | | NULL | | | value | char(15) | YES | | NULL | | | status | smallint | YES | | NULL | | +----------+----------+------+-----+---------+-------+ 3 rows in set (0.02 sec) Either you create an insert query script with as many entries as in dummy1 or use cross join to produce the required resultset. mysql> select * from dummy1 cross join (select 1 union select 5 union select 7) T2 order by same_col; +----------+------------+---+ | same_col | diff_col_1 | 1 | +----------+------------+---+ | 1 | A | 1 | | 1 | A | 5 | | 1 | A | 7 | | 2 | B | 1 | | 2 | B | 5 | | 2 | B | 7 | | 3 | C | 1 | | 3 | C | 5 | | 3 | C | 7 | +----------+------------+---+ 9 rows in set (0.00 sec) The T2 section in the above query is called a sub-query . We will discuss the same in the next section. Natural Join This implicitly selects the common column from table A and table B and performs an inner join. mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e natural join salaries s limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) Notice how natural join and using takes care that the common column is displayed only once if you are not explicitly selecting columns for the query. 
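The outer join examples above only showed LEFT joins; for completeness, here is a sketch of the RIGHT join counterpart on the same dummy tables. Every record of dummy2 is kept, and the dummy1 columns would be NULL wherever there is no match (with the rows shown above, both dummy2 records find a match): select * from dummy1 d1 right join dummy2 d2 on d1.same_col=d2.same_col; -- equivalent to: select * from dummy2 d2 left join dummy1 d1 on d2.same_col=d1.same_col;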
Some More Examples Display emp_no, salary, title and dept of the employees where salary > 80000 mysql> select e.emp_no, s.salary, t.title, d.dept_no from employees e join salaries s using (emp_no) join titles t using (emp_no) join dept_emp d using (emp_no) where s.salary > 80000 limit 5; +--------+--------+--------------+---------+ | emp_no | salary | title | dept_no | +--------+--------+--------------+---------+ | 10017 | 82163 | Senior Staff | d001 | | 10017 | 86157 | Senior Staff | d001 | | 10017 | 89619 | Senior Staff | d001 | | 10017 | 91985 | Senior Staff | d001 | | 10017 | 96122 | Senior Staff | d001 | +--------+--------+--------------+---------+ 5 rows in set (0.00 sec) Display title-wise count of employees in each department order by dept_no mysql> select d.dept_no, t.title, count(*) from titles t left join dept_emp d using (emp_no) group by d.dept_no, t.title order by d.dept_no limit 10; +---------+--------------------+----------+ | dept_no | title | count(*) | +---------+--------------------+----------+ | d001 | Manager | 2 | | d001 | Senior Staff | 13940 | | d001 | Staff | 16196 | | d002 | Manager | 2 | | d002 | Senior Staff | 12139 | | d002 | Staff | 13929 | | d003 | Manager | 2 | | d003 | Senior Staff | 12274 | | d003 | Staff | 14342 | | d004 | Assistant Engineer | 6445 | +---------+--------------------+----------+ 10 rows in set (1.32 sec) SELECT - Subquery A subquery is generally a smaller resultset that can be used to power a select query in many ways. It can be used in a \u2018where\u2019 condition, can be used in place of join mostly where a join could be an overkill. These subqueries are also termed as derived tables. They must have a table alias in the select query. Let\u2019s look at some examples of subqueries. Here we got the department name from the departments table by a subquery which used dept_no from dept_emp table. mysql> select e.emp_no, (select dept_name from departments where dept_no=d.dept_no) dept_name from employees e join dept_emp d using (emp_no) limit 5; +--------+-----------------+ | emp_no | dept_name | +--------+-----------------+ | 10001 | Development | | 10002 | Sales | | 10003 | Production | | 10004 | Production | | 10005 | Human Resources | +--------+-----------------+ 5 rows in set (0.01 sec) Here, we used the \u2018avg\u2019 query above (which got the avg salary) as a subquery to list the employees whose latest salary is more than the average. mysql> select avg(salary) from salaries; +-------------+ | avg(salary) | +-------------+ | 63810.7448 | +-------------+ 1 row in set (0.80 sec) mysql> select e.emp_no, max(s.salary) from employees e natural join salaries s group by e.emp_no having max(s.salary) > (select avg(salary) from salaries) limit 10; +--------+---------------+ | emp_no | max(s.salary) | +--------+---------------+ | 10001 | 88958 | | 10002 | 72527 | | 10004 | 74057 | | 10005 | 94692 | | 10007 | 88070 | | 10009 | 94443 | | 10010 | 80324 | | 10013 | 68901 | | 10016 | 77935 | | 10017 | 99651 | +--------+---------------+ 10 rows in set (0.56 sec)","title":"Select Query"},{"location":"level101/databases_sql/select_query/#select-query","text":"The most commonly used command while working with MySQL is SELECT. It is used to fetch the result set from one or more tables. 
The general form of a typical select query looks like:- SELECT expr FROM table1 [WHERE condition] [GROUP BY column_list HAVING condition] [ORDER BY column_list ASC|DESC] [LIMIT #] The above general form contains some commonly used clauses of a SELECT query:- expr - comma-separated column list or * (for all columns) WHERE - a condition is provided, if true, directs the query to select only those records. GROUP BY - groups the entire result set based on the column list provided. An aggregate function is recommended to be present in the select expression of the query. HAVING supports grouping by putting a condition on the selected or any other aggregate function. ORDER BY - sorts the result set based on the column list in ascending or descending order. LIMIT - commonly used to limit the number of records. Let\u2019s have a look at some examples for a better understanding of the above. The dataset used for the examples below is available here and is free to use. Select all records mysql> select * from employees limit 5; +--------+------------+------------+-----------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+-----------+--------+------------+ | 10001 | 1953-09-02 | Georgi | Facello | M | 1986-06-26 | | 10002 | 1964-06-02 | Bezalel | Simmel | F | 1985-11-21 | | 10003 | 1959-12-03 | Parto | Bamford | M | 1986-08-28 | | 10004 | 1954-05-01 | Chirstian | Koblick | M | 1986-12-01 | | 10005 | 1955-01-21 | Kyoichi | Maliniak | M | 1989-09-12 | +--------+------------+------------+-----------+--------+------------+ 5 rows in set (0.00 sec) Select specific fields for all records mysql> select first_name, last_name, gender from employees limit 5; +------------+-----------+--------+ | first_name | last_name | gender | +------------+-----------+--------+ | Georgi | Facello | M | | Bezalel | Simmel | F | | Parto | Bamford | M | | Chirstian | Koblick | M | | Kyoichi | Maliniak | M | +------------+-----------+--------+ 5 rows in set (0.00 sec) Select all records Where hire_date >= January 1, 1990 mysql> select * from employees where hire_date >= '1990-01-01' limit 5; +--------+------------+------------+-------------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+-------------+--------+------------+ | 10008 | 1958-02-19 | Saniya | Kalloufi | M | 1994-09-15 | | 10011 | 1953-11-07 | Mary | Sluis | F | 1990-01-22 | | 10012 | 1960-10-04 | Patricio | Bridgland | M | 1992-12-18 | | 10016 | 1961-05-02 | Kazuhito | Cappelletti | M | 1995-01-27 | | 10017 | 1958-07-06 | Cristinel | Bouloucos | F | 1993-08-03 | +--------+------------+------------+-------------+--------+------------+ 5 rows in set (0.01 sec) Select first_name and last_name from all records Where birth_date >= 1960 AND gender = \u2018F\u2019 mysql> select first_name, last_name from employees where year(birth_date) >= 1960 and gender='F' limit 5; +------------+-----------+ | first_name | last_name | +------------+-----------+ | Bezalel | Simmel | | Duangkaew | Piveteau | | Divier | Reistad | | Jeong | Reistad | | Mingsen | Casley | +------------+-----------+ 5 rows in set (0.00 sec) Display the total number of records mysql> select count(*) from employees; +----------+ | count(*) | +----------+ | 300024 | +----------+ 1 row in set (0.05 sec) Display gender-wise count of all records mysql> select gender, count(*) from employees group by gender; +--------+----------+ | gender | count(*) | 
+--------+----------+ | M | 179973 | | F | 120051 | +--------+----------+ 2 rows in set (0.14 sec) Display the year of hire_date and number of employees hired that year, also only those years where more than 20k employees were hired mysql> select year(hire_date), count(*) from employees group by year(hire_date) having count(*) > 20000; +-----------------+----------+ | year(hire_date) | count(*) | +-----------------+----------+ | 1985 | 35316 | | 1986 | 36150 | | 1987 | 33501 | | 1988 | 31436 | | 1989 | 28394 | | 1990 | 25610 | | 1991 | 22568 | | 1992 | 20402 | +-----------------+----------+ 8 rows in set (0.14 sec) Display all records ordered by their hire_date in descending order. If hire_date is the same then in order of their birth_date ascending order mysql> select * from employees order by hire_date desc, birth_date asc limit 5; +--------+------------+------------+-----------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+-----------+--------+------------+ | 463807 | 1964-06-12 | Bikash | Covnot | M | 2000-01-28 | | 428377 | 1957-05-09 | Yucai | Gerlach | M | 2000-01-23 | | 499553 | 1954-05-06 | Hideyuki | Delgrande | F | 2000-01-22 | | 222965 | 1959-08-07 | Volkmar | Perko | F | 2000-01-13 | | 47291 | 1960-09-09 | Ulf | Flexer | M | 2000-01-12 | +--------+------------+------------+-----------+--------+------------+ 5 rows in set (0.12 sec)","title":"SELECT Query"},{"location":"level101/databases_sql/select_query/#select-joins","text":"JOIN statement is used to produce a combined result set from two or more tables based on certain conditions. It can be also used with Update and Delete statements but we will be focussing on the select query. Following is a basic general form for joins SELECT table1.col1, table2.col1, ... (any combination) FROM table1 table2 ON (or USING depends on join_type) table1.column_for_joining = table2.column_for_joining WHERE \u2026 Any number of columns can be selected, but it is recommended to select only those which are relevant to increase the readability of the resultset. All other clauses like where, group by are not mandatory. Let\u2019s discuss the types of JOINs supported by MySQL Syntax. Inner Join This joins table A with table B on a condition. Only the records where the condition is True are selected in the resultset. 
Display some details of employees along with their salary mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e join salaries s on e.emp_no=s.emp_no limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) Similar result can be achieved by mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e join salaries s using (emp_no) limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) And also by mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e natural join salaries s limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) Outer Join Majorly of two types:- - LEFT - joining complete table A with table B on a condition. All the records from table A are selected, but from table B, only those records are selected where the condition is True. - RIGHT - Exact opposite of the left join. Let us assume the below tables for understanding left join better. mysql> select * from dummy1; +----------+------------+ | same_col | diff_col_1 | +----------+------------+ | 1 | A | | 2 | B | | 3 | C | +----------+------------+ mysql> select * from dummy2; +----------+------------+ | same_col | diff_col_2 | +----------+------------+ | 1 | X | | 3 | Y | +----------+------------+ A simple select join will look like the one below. mysql> select * from dummy1 d1 left join dummy2 d2 on d1.same_col=d2.same_col; +----------+------------+----------+------------+ | same_col | diff_col_1 | same_col | diff_col_2 | +----------+------------+----------+------------+ | 1 | A | 1 | X | | 3 | C | 3 | Y | | 2 | B | NULL | NULL | +----------+------------+----------+------------+ 3 rows in set (0.00 sec) Which can also be written as mysql> select * from dummy1 d1 left join dummy2 d2 using(same_col); +----------+------------+------------+ | same_col | diff_col_1 | diff_col_2 | +----------+------------+------------+ | 1 | A | X | | 3 | C | Y | | 2 | B | NULL | +----------+------------+------------+ 3 rows in set (0.00 sec) And also as mysql> select * from dummy1 d1 natural left join dummy2 d2; +----------+------------+------------+ | same_col | diff_col_1 | diff_col_2 | +----------+------------+------------+ | 1 | A | X | | 3 | C | Y | | 2 | B | NULL | +----------+------------+------------+ 3 rows in set (0.00 sec) Cross Join This does a cross product of table A and table B without any condition. It doesn\u2019t have a lot of applications in the real world. 
A Simple Cross Join looks like this mysql> select * from dummy1 cross join dummy2; +----------+------------+----------+------------+ | same_col | diff_col_1 | same_col | diff_col_2 | +----------+------------+----------+------------+ | 1 | A | 3 | Y | | 1 | A | 1 | X | | 2 | B | 3 | Y | | 2 | B | 1 | X | | 3 | C | 3 | Y | | 3 | C | 1 | X | +----------+------------+----------+------------+ 6 rows in set (0.01 sec) One use case that can come in handy is when you have to fill in some missing entries. For example, all the entries from dummy1 must be inserted into a similar table dummy3, with each record must have 3 entries with statuses 1, 5 and 7. mysql> desc dummy3; +----------+----------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +----------+----------+------+-----+---------+-------+ | same_col | int | YES | | NULL | | | value | char(15) | YES | | NULL | | | status | smallint | YES | | NULL | | +----------+----------+------+-----+---------+-------+ 3 rows in set (0.02 sec) Either you create an insert query script with as many entries as in dummy1 or use cross join to produce the required resultset. mysql> select * from dummy1 cross join (select 1 union select 5 union select 7) T2 order by same_col; +----------+------------+---+ | same_col | diff_col_1 | 1 | +----------+------------+---+ | 1 | A | 1 | | 1 | A | 5 | | 1 | A | 7 | | 2 | B | 1 | | 2 | B | 5 | | 2 | B | 7 | | 3 | C | 1 | | 3 | C | 5 | | 3 | C | 7 | +----------+------------+---+ 9 rows in set (0.00 sec) The T2 section in the above query is called a sub-query . We will discuss the same in the next section. Natural Join This implicitly selects the common column from table A and table B and performs an inner join. mysql> select e.emp_no,e.first_name,e.last_name,s.salary from employees e natural join salaries s limit 5; +--------+------------+-----------+--------+ | emp_no | first_name | last_name | salary | +--------+------------+-----------+--------+ | 10001 | Georgi | Facello | 60117 | | 10001 | Georgi | Facello | 62102 | | 10001 | Georgi | Facello | 66074 | | 10001 | Georgi | Facello | 66596 | | 10001 | Georgi | Facello | 66961 | +--------+------------+-----------+--------+ 5 rows in set (0.00 sec) Notice how natural join and using takes care that the common column is displayed only once if you are not explicitly selecting columns for the query. 
Some More Examples Display emp_no, salary, title and dept of the employees where salary > 80000 mysql> select e.emp_no, s.salary, t.title, d.dept_no from employees e join salaries s using (emp_no) join titles t using (emp_no) join dept_emp d using (emp_no) where s.salary > 80000 limit 5; +--------+--------+--------------+---------+ | emp_no | salary | title | dept_no | +--------+--------+--------------+---------+ | 10017 | 82163 | Senior Staff | d001 | | 10017 | 86157 | Senior Staff | d001 | | 10017 | 89619 | Senior Staff | d001 | | 10017 | 91985 | Senior Staff | d001 | | 10017 | 96122 | Senior Staff | d001 | +--------+--------+--------------+---------+ 5 rows in set (0.00 sec) Display title-wise count of employees in each department order by dept_no mysql> select d.dept_no, t.title, count(*) from titles t left join dept_emp d using (emp_no) group by d.dept_no, t.title order by d.dept_no limit 10; +---------+--------------------+----------+ | dept_no | title | count(*) | +---------+--------------------+----------+ | d001 | Manager | 2 | | d001 | Senior Staff | 13940 | | d001 | Staff | 16196 | | d002 | Manager | 2 | | d002 | Senior Staff | 12139 | | d002 | Staff | 13929 | | d003 | Manager | 2 | | d003 | Senior Staff | 12274 | | d003 | Staff | 14342 | | d004 | Assistant Engineer | 6445 | +---------+--------------------+----------+ 10 rows in set (1.32 sec)","title":"SELECT - JOINS"},{"location":"level101/databases_sql/select_query/#select-subquery","text":"A subquery is generally a smaller resultset that can be used to power a select query in many ways. It can be used in a \u2018where\u2019 condition, can be used in place of join mostly where a join could be an overkill. These subqueries are also termed as derived tables. They must have a table alias in the select query. Let\u2019s look at some examples of subqueries. Here we got the department name from the departments table by a subquery which used dept_no from dept_emp table. mysql> select e.emp_no, (select dept_name from departments where dept_no=d.dept_no) dept_name from employees e join dept_emp d using (emp_no) limit 5; +--------+-----------------+ | emp_no | dept_name | +--------+-----------------+ | 10001 | Development | | 10002 | Sales | | 10003 | Production | | 10004 | Production | | 10005 | Human Resources | +--------+-----------------+ 5 rows in set (0.01 sec) Here, we used the \u2018avg\u2019 query above (which got the avg salary) as a subquery to list the employees whose latest salary is more than the average. mysql> select avg(salary) from salaries; +-------------+ | avg(salary) | +-------------+ | 63810.7448 | +-------------+ 1 row in set (0.80 sec) mysql> select e.emp_no, max(s.salary) from employees e natural join salaries s group by e.emp_no having max(s.salary) > (select avg(salary) from salaries) limit 10; +--------+---------------+ | emp_no | max(s.salary) | +--------+---------------+ | 10001 | 88958 | | 10002 | 72527 | | 10004 | 74057 | | 10005 | 94692 | | 10007 | 88070 | | 10009 | 94443 | | 10010 | 80324 | | 10013 | 68901 | | 10016 | 77935 | | 10017 | 99651 | +--------+---------------+ 10 rows in set (0.56 sec)","title":"SELECT - Subquery"},{"location":"level101/git/branches/","text":"Working With Branches Coming back to our local repo which has two commits. So far, what we have is a single line of history. Commits are chained in a single line. But sometimes you may have a need to work on two different features in parallel in the same repo. 
Now one option here could be making a new folder/repo with the same code and use that for another feature development. But there's a better way. Use branches. Since git follows tree like structure for commits, we can use branches to work on different sets of features. From a commit, two or more branches can be created and branches can also be merged. Using branches, there can exist multiple lines of histories and we can checkout to any of them and work on it. Checking out, as we discussed earlier, would simply mean replacing contents of the directory (repo) with the snapshot at the checked out version. Let's create a branch and see how it looks like: $ git branch b1 $ git log --oneline --graph * 7f3b00e (HEAD -> master, b1) adding file 2 * df2fb7a adding file 1 We create a branch called b1 . Git log tells us that b1 also points to the last commit (7f3b00e) but the HEAD is still pointing to master. If you remember, HEAD points to the commit/reference wherever you are checkout to. So if we checkout to b1 , HEAD should point to that. Let's confirm: $ git checkout b1 Switched to branch 'b1' $ git log --oneline --graph * 7f3b00e (HEAD -> b1, master) adding file 2 * df2fb7a adding file 1 b1 still points to the same commit but HEAD now points to b1 . Since we create a branch at commit 7f3b00e , there will be two lines of histories starting this commit. Depending on which branch you are checked out on, the line of history will progress. At this moment, we are checked out on branch b1 , so making a new commit will advance branch reference b1 to that commit and current b1 commit will become its parent. Let's do that. # Creating a file and making a commit $ echo \"I am a file in b1 branch\" > b1.txt $ git add b1.txt $ git commit -m \"adding b1 file\" [b1 872a38f] adding b1 file 1 file changed, 1 insertion(+) create mode 100644 b1.txt # The new line of history $ git log --oneline --graph * 872a38f (HEAD -> b1) adding b1 file * 7f3b00e (master) adding file 2 * df2fb7a adding file 1 $ Do note that master is still pointing to the old commit it was pointing to. We can now checkout to master branch and make commits there. This will result in another line of history starting from commit 7f3b00e. # checkout to master branch $ git checkout master Switched to branch 'master' # Creating a new commit on master branch $ echo \"new file in master branch\" > master.txt $ git add master.txt $ git commit -m \"adding master.txt file\" [master 60dc441] adding master.txt file 1 file changed, 1 insertion(+) create mode 100644 master.txt # The history line $ git log --oneline --graph * 60dc441 (HEAD -> master) adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 Notice how branch b1 is not visible here since we are on the master. Let's try to visualize both to get the whole picture: $ git log --oneline --graph --all * 60dc441 (HEAD -> master) adding master.txt file | * 872a38f (b1) adding b1 file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 Above tree structure should make things clear. Notice a clear branch/fork on commit 7f3b00e. This is how we create branches. Now they both are two separate lines of history on which feature development can be done independently. To reiterate, internally, git is just a tree of commits. Branch names (human readable) are pointers to those commits in the tree. We use various git commands to work with the tree structure and references. Git accordingly modifies contents of our repo. 
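As a convenient aside, git branch with no arguments lists the branches that exist, and a branch can be created and checked out in a single step with checkout -b; b2 below is just a throwaway example name:
$ git branch
  b1
* master
$ git checkout -b b2
Switched to a new branch 'b2'
$ git checkout master
Switched to branch 'master'
Nothing about the history changes here; b2 simply becomes one more pointer at commit 60dc441.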
Merges Now say the feature you were working on branch b1 is complete and you need to merge it on master branch, where all the final version of code goes. So first you will checkout to branch master and then you pull the latest code from upstream (eg: GitHub). Then you need to merge your code from b1 into master. There could be two ways this can be done. Here is the current history: $ git log --oneline --graph --all * 60dc441 (HEAD -> master) adding master.txt file | * 872a38f (b1) adding b1 file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 Option 1: Directly merge the branch. Merging the branch b1 into master will result in a new merge commit. This will merge changes from two different lines of history and create a new commit of the result. $ git merge b1 Merge made by the 'recursive' strategy. b1.txt | 1 + 1 file changed, 1 insertion(+) create mode 100644 b1.txt $ git log --oneline --graph --all * 8fc28f9 (HEAD -> master) Merge branch 'b1' |\\ | * 872a38f (b1) adding b1 file * | 60dc441 adding master.txt file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 You can see a new merge commit created (8fc28f9). You will be prompted for the commit message. If there are a lot of branches in the repo, this result will end-up with a lot of merge commits. Which looks ugly compared to a single line of history of development. So let's look at an alternative approach First let's reset our last merge and go to the previous state. $ git reset --hard 60dc441 HEAD is now at 60dc441 adding master.txt file $ git log --oneline --graph --all * 60dc441 (HEAD -> master) adding master.txt file | * 872a38f (b1) adding b1 file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 Option 2: Rebase. Now, instead of merging two branches which has a similar base (commit: 7f3b00e), let us rebase branch b1 on to current master. What this means is take branch b1 (from commit 7f3b00e to commit 872a38f) and rebase (put them on top of) master (60dc441). # Switch to b1 $ git checkout b1 Switched to branch 'b1' # Rebase (b1 which is current branch) on master $ git rebase master First, rewinding head to replay your work on top of it... Applying: adding b1 file # The result $ git log --oneline --graph --all * 5372c8f (HEAD -> b1) adding b1 file * 60dc441 (master) adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 You can see b1 which had 1 commit. That commit's parent was 7f3b00e . But since we rebase it on master ( 60dc441 ). That becomes the parent now. As a side effect, you also see it has become a single line of history. Now if we were to merge b1 into master , it would simply mean change master to point to 5372c8f which is b1 . Let's try it: # checkout to master since we want to merge code into master $ git checkout master Switched to branch 'master' # the current history, where b1 is based on master $ git log --oneline --graph --all * 5372c8f (b1) adding b1 file * 60dc441 (HEAD -> master) adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 # Performing the merge, notice the \"fast-forward\" message $ git merge b1 Updating 60dc441..5372c8f Fast-forward b1.txt | 1 + 1 file changed, 1 insertion(+) create mode 100644 b1.txt # The Result $ git log --oneline --graph --all * 5372c8f (HEAD -> master, b1) adding b1 file * 60dc441 adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 Now you see both b1 and master are pointing to the same commit. Your code has been merged to the master branch and it can be pushed. Also we have clean line of history! 
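After the merge (by either option), the b1 label is no longer needed; deleting it does not delete any commits, since they remain reachable from master. A small sketch:
$ git branch -d b1
Deleted branch b1 (was 5372c8f).
$ git log --oneline --graph
* 5372c8f (HEAD -> master) adding b1 file
* 60dc441 adding master.txt file
* 7f3b00e adding file 2
* df2fb7a adding file 1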
:D","title":"Working With Branches"},{"location":"level101/git/branches/#working-with-branches","text":"Coming back to our local repo which has two commits. So far, what we have is a single line of history. Commits are chained in a single line. But sometimes you may have a need to work on two different features in parallel in the same repo. Now one option here could be making a new folder/repo with the same code and use that for another feature development. But there's a better way. Use branches. Since git follows tree like structure for commits, we can use branches to work on different sets of features. From a commit, two or more branches can be created and branches can also be merged. Using branches, there can exist multiple lines of histories and we can checkout to any of them and work on it. Checking out, as we discussed earlier, would simply mean replacing contents of the directory (repo) with the snapshot at the checked out version. Let's create a branch and see how it looks like: $ git branch b1 $ git log --oneline --graph * 7f3b00e (HEAD -> master, b1) adding file 2 * df2fb7a adding file 1 We create a branch called b1 . Git log tells us that b1 also points to the last commit (7f3b00e) but the HEAD is still pointing to master. If you remember, HEAD points to the commit/reference wherever you are checkout to. So if we checkout to b1 , HEAD should point to that. Let's confirm: $ git checkout b1 Switched to branch 'b1' $ git log --oneline --graph * 7f3b00e (HEAD -> b1, master) adding file 2 * df2fb7a adding file 1 b1 still points to the same commit but HEAD now points to b1 . Since we create a branch at commit 7f3b00e , there will be two lines of histories starting this commit. Depending on which branch you are checked out on, the line of history will progress. At this moment, we are checked out on branch b1 , so making a new commit will advance branch reference b1 to that commit and current b1 commit will become its parent. Let's do that. # Creating a file and making a commit $ echo \"I am a file in b1 branch\" > b1.txt $ git add b1.txt $ git commit -m \"adding b1 file\" [b1 872a38f] adding b1 file 1 file changed, 1 insertion(+) create mode 100644 b1.txt # The new line of history $ git log --oneline --graph * 872a38f (HEAD -> b1) adding b1 file * 7f3b00e (master) adding file 2 * df2fb7a adding file 1 $ Do note that master is still pointing to the old commit it was pointing to. We can now checkout to master branch and make commits there. This will result in another line of history starting from commit 7f3b00e. # checkout to master branch $ git checkout master Switched to branch 'master' # Creating a new commit on master branch $ echo \"new file in master branch\" > master.txt $ git add master.txt $ git commit -m \"adding master.txt file\" [master 60dc441] adding master.txt file 1 file changed, 1 insertion(+) create mode 100644 master.txt # The history line $ git log --oneline --graph * 60dc441 (HEAD -> master) adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 Notice how branch b1 is not visible here since we are on the master. Let's try to visualize both to get the whole picture: $ git log --oneline --graph --all * 60dc441 (HEAD -> master) adding master.txt file | * 872a38f (b1) adding b1 file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 Above tree structure should make things clear. Notice a clear branch/fork on commit 7f3b00e. This is how we create branches. 
Now they both are two separate lines of history on which feature development can be done independently. To reiterate, internally, git is just a tree of commits. Branch names (human readable) are pointers to those commits in the tree. We use various git commands to work with the tree structure and references. Git accordingly modifies contents of our repo.","title":"Working With Branches"},{"location":"level101/git/branches/#merges","text":"Now say the feature you were working on branch b1 is complete and you need to merge it on master branch, where all the final version of code goes. So first you will checkout to branch master and then you pull the latest code from upstream (eg: GitHub). Then you need to merge your code from b1 into master. There could be two ways this can be done. Here is the current history: $ git log --oneline --graph --all * 60dc441 (HEAD -> master) adding master.txt file | * 872a38f (b1) adding b1 file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 Option 1: Directly merge the branch. Merging the branch b1 into master will result in a new merge commit. This will merge changes from two different lines of history and create a new commit of the result. $ git merge b1 Merge made by the 'recursive' strategy. b1.txt | 1 + 1 file changed, 1 insertion(+) create mode 100644 b1.txt $ git log --oneline --graph --all * 8fc28f9 (HEAD -> master) Merge branch 'b1' |\\ | * 872a38f (b1) adding b1 file * | 60dc441 adding master.txt file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 You can see a new merge commit created (8fc28f9). You will be prompted for the commit message. If there are a lot of branches in the repo, this result will end-up with a lot of merge commits. Which looks ugly compared to a single line of history of development. So let's look at an alternative approach First let's reset our last merge and go to the previous state. $ git reset --hard 60dc441 HEAD is now at 60dc441 adding master.txt file $ git log --oneline --graph --all * 60dc441 (HEAD -> master) adding master.txt file | * 872a38f (b1) adding b1 file |/ * 7f3b00e adding file 2 * df2fb7a adding file 1 Option 2: Rebase. Now, instead of merging two branches which has a similar base (commit: 7f3b00e), let us rebase branch b1 on to current master. What this means is take branch b1 (from commit 7f3b00e to commit 872a38f) and rebase (put them on top of) master (60dc441). # Switch to b1 $ git checkout b1 Switched to branch 'b1' # Rebase (b1 which is current branch) on master $ git rebase master First, rewinding head to replay your work on top of it... Applying: adding b1 file # The result $ git log --oneline --graph --all * 5372c8f (HEAD -> b1) adding b1 file * 60dc441 (master) adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 You can see b1 which had 1 commit. That commit's parent was 7f3b00e . But since we rebase it on master ( 60dc441 ). That becomes the parent now. As a side effect, you also see it has become a single line of history. Now if we were to merge b1 into master , it would simply mean change master to point to 5372c8f which is b1 . 
Let's try it: # checkout to master since we want to merge code into master $ git checkout master Switched to branch 'master' # the current history, where b1 is based on master $ git log --oneline --graph --all * 5372c8f (b1) adding b1 file * 60dc441 (HEAD -> master) adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 # Performing the merge, notice the \"fast-forward\" message $ git merge b1 Updating 60dc441..5372c8f Fast-forward b1.txt | 1 + 1 file changed, 1 insertion(+) create mode 100644 b1.txt # The Result $ git log --oneline --graph --all * 5372c8f (HEAD -> master, b1) adding b1 file * 60dc441 adding master.txt file * 7f3b00e adding file 2 * df2fb7a adding file 1 Now you see both b1 and master are pointing to the same commit. Your code has been merged to the master branch and it can be pushed. Also we have clean line of history! :D","title":"Merges"},{"location":"level101/git/conclusion/","text":"What next from here? There are a lot of git commands and features which we have not explored here. But with the base built-up, be sure to explore concepts like Cherrypick Squash Amend Stash Reset","title":"Conclusion"},{"location":"level101/git/conclusion/#what-next-from-here","text":"There are a lot of git commands and features which we have not explored here. But with the base built-up, be sure to explore concepts like Cherrypick Squash Amend Stash Reset","title":"What next from here?"},{"location":"level101/git/git-basics/","text":"Git Prerequisites Have Git installed https://git-scm.com/downloads Have taken any git high level tutorial or following LinkedIn learning courses https://www.linkedin.com/learning/git-essential-training-the-basics/ https://www.linkedin.com/learning/git-branches-merges-and-remotes/ The Official Git Docs What to expect from this course As an engineer in the field of computer science, having knowledge of version control tools becomes almost a requirement. While there are a lot of version control tools that exist today like SVN, Mercurial, etc, Git perhaps is the most used one and this course we will be working with Git. While this course does not start with Git 101 and expects basic knowledge of git as a prerequisite, it will reintroduce the git concepts known by you with details covering what is happening under the hood as you execute various git commands. So that next time you run a git command, you will be able to press enter more confidently! What is not covered under this course Advanced usage and specifics of internal implementation details of Git. Course Contents Git Basics Working with Branches Git with Github Hooks Git Basics Though you might be aware already, let's revisit why we need a version control system. As the project grows and multiple developers start working on it, an efficient method for collaboration is warranted. Git helps the team collaborate easily and also maintains the history of the changes happening with the codebase. Creating a Git Repo Any folder can be converted into a git repository. After executing the following command, we will see a .git folder within the folder, which makes our folder a git repository. All the magic that git does, .git folder is the enabler for the same. # creating an empty folder and changing current dir to it $ cd /tmp $ mkdir school-of-sre $ cd school-of-sre/ # initialize a git repo $ git init Initialized empty Git repository in /private/tmp/school-of-sre/.git/ As the output says, an empty git repo has been initialized in our folder. Let's take a look at what is there. 
$ ls .git/ HEAD config description hooks info objects refs There are a bunch of folders and files in the .git folder. As I said, all these enables git to do its magic. We will look into some of these folders and files. But for now, what we have is an empty git repository. Tracking a File Now as you might already know, let us create a new file in our repo (we will refer to the folder as repo now.) And see git status $ echo \"I am file 1\" > file1.txt $ git status On branch master No commits yet Untracked files: (use \"git add ...\" to include in what will be committed) file1.txt nothing added to commit but untracked files present (use \"git add\" to track) The current git status says No commits yet and there is one untracked file. Since we just created the file, git is not tracking that file. We explicitly need to ask git to track files and folders. (also checkout gitignore ) And how we do that is via git add command as suggested in the above output. Then we go ahead and create a commit. $ git add file1.txt $ git status On branch master No commits yet Changes to be committed: (use \"git rm --cached ...\" to unstage) new file: file1.txt $ git commit -m \"adding file 1\" [master (root-commit) df2fb7a] adding file 1 1 file changed, 1 insertion(+) create mode 100644 file1.txt Notice how after adding the file, git status says Changes to be committed: . What it means is whatever is listed there, will be included in the next commit. Then we go ahead and create a commit, with an attached messaged via -m . More About a Commit Commit is a snapshot of the repo. Whenever a commit is made, a snapshot of the current state of repo (the folder) is taken and saved. Each commit has a unique ID. ( df2fb7a for the commit we made in the previous step). As we keep adding/changing more and more contents and keep making commits, all those snapshots are stored by git. Again, all this magic happens inside the .git folder. This is where all this snapshot or versions are stored in an efficient manner. Adding More Changes Let us create one more file and commit the change. It would look the same as the previous commit we made. $ echo \"I am file 2\" > file2.txt $ git add file2.txt $ git commit -m \"adding file 2\" [master 7f3b00e] adding file 2 1 file changed, 1 insertion(+) create mode 100644 file2.txt A new commit with ID 7f3b00e has been created. You can issue git status at any time to see the state of the repository. **IMPORTANT: Note that commit IDs are long string (SHA) but we can refer to a commit by its initial few (8 or more) characters too. We will interchangeably using shorter and longer commit IDs.** Now that we have two commits, let's visualize them: $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 git log , as the name suggests, prints the log of all the git commits. Here you see two additional arguments, --oneline prints the shorter version of the log, ie: the commit message only and not the person who made the commit and when. --graph prints it in graph format. Now at this moment the commits might look like just one in each line but all commits are stored as a tree like data structure internally by git. That means there can be two or more children commits of a given commit. And not just a single line of commits. We will look more into this part when we get to the Branches section. For now this is our commit history: df2fb7a ===> 7f3b00e Are commits really linked? 
As I just said, the two commits we just made are linked via tree like data structure and we saw how they are linked. But let's actually verify it. Everything in git is an object. Newly created files are stored as an object. Changes to file are stored as an objects and even commits are objects. To view contents of an object we can use the following command with the object's ID. We will take a look at the contents of the second commit $ git cat-file -p 7f3b00e tree ebf3af44d253e5328340026e45a9fa9ae3ea1982 parent df2fb7a61f5d40c1191e0fdeb0fc5d6e7969685a author Sanket Patel 1603273316 -0700 committer Sanket Patel 1603273316 -0700 adding file 2 Take a note of parent attribute in the above output. It points to the commit id of the first commit we made. So this proves that they are linked! Additionally you can see the second commit's message in this object. As I said all this magic is enabled by .git folder and the object to which we are looking at also is in that folder. $ ls .git/objects/7f/3b00eaa957815884198e2fdfec29361108d6a9 .git/objects/7f/3b00eaa957815884198e2fdfec29361108d6a9 It is stored in .git/objects/ folder. All the files and changes to them as well are stored in this folder. The Version Control part of Git We already can see two commits (versions) in our git log. One thing a version control tool gives you is ability to browse back and forth in history. For example: some of your users are running an old version of code and they are reporting an issue. In order to debug the issue, you need access to the old code. The one in your current repo is the latest code. In this example, you are working on the second commit (7f3b00e) and someone reported an issue with the code snapshot at commit (df2fb7a). This is how you would get access to the code at any older commit # Current contents, two files present $ ls file1.txt file2.txt # checking out to (an older) commit $ git checkout df2fb7a Note: checking out 'df2fb7a'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b HEAD is now at df2fb7a adding file 1 # checking contents, can verify it has old contents $ ls file1.txt So this is how we would get access to old versions/snapshots. All we need is a reference to that snapshot. Upon executing git checkout ... , what git does for you is use the .git folder, see what was the state of things (files and folders) at that version/reference and replace the contents of current directory with those contents. The then-existing content will no longer be present in the local dir (repo) but we can and will still get access to them because they are tracked via git commit and .git folder has them stored/tracked. Reference I mention in the previous section that we need a reference to the version. By default, git repo is made of tree of commits. And each commit has a unique IDs. But the unique ID is not the only thing we can reference commits via. There are multiple ways to reference commits. For example: HEAD is a reference to current commit. Whatever commit your repo is checked out at, HEAD will point to that. HEAD~1 is reference to previous commit. So while checking out previous version in section above, we could have done git checkout HEAD~1 . 
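Any of these references can be resolved to a commit ID with rev-parse; for instance, assuming we are checked out at the latest commit (7f3b00e):
$ git rev-parse --short HEAD HEAD~1
7f3b00e
df2fb7a
This is purely for inspection and does not change anything in the repo.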
Similarly, master is also a reference (to a branch). Since git uses tree like structure to store commits, there of course will be branches. And the default branch is called master . Master (or any branch reference) will point to the latest commit in the branch. Even though we have checked out to the previous commit in out repo, master still points to the latest commit. And we can get back to the latest version by checkout at master reference $ git checkout master Previous HEAD position was df2fb7a adding file 1 Switched to branch 'master' # now we will see latest code, with two files $ ls file1.txt file2.txt Note, instead of master in above command, we could have used commit's ID as well. References and The Magic Let's look at the state of things. Two commits, master and HEAD references are pointing to the latest commit $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 The magic? Let's examine these files: $ cat .git/refs/heads/master 7f3b00eaa957815884198e2fdfec29361108d6a9 Viola! Where master is pointing to is stored in a file. Whenever git needs to know where master reference is pointing to, or if git needs to update where master points, it just needs to update the file above. So when you create a new commit, a new commit is created on top of the current commit and the master file is updated with the new commit's ID. Similary, for HEAD reference: $ cat .git/HEAD ref: refs/heads/master We can see HEAD is pointing to a reference called refs/heads/master . So HEAD will point where ever the master points. Little Adventure We discussed how git will update the files as we execute commands. But let's try to do it ourselves, by hand, and see what happens. $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 Now let's change master to point to the previous/first commit. $ echo df2fb7a61f5d40c1191e0fdeb0fc5d6e7969685a > .git/refs/heads/master $ git log --oneline --graph * df2fb7a (HEAD -> master) adding file 1 # RESETTING TO ORIGINAL $ echo 7f3b00eaa957815884198e2fdfec29361108d6a9 > .git/refs/heads/master $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 We just edited the master reference file and now we can see only the first commit in git log. Undoing the change to the file brings the state back to original. Not so much of magic, is it?","title":"Git Basics"},{"location":"level101/git/git-basics/#git","text":"","title":"Git"},{"location":"level101/git/git-basics/#prerequisites","text":"Have Git installed https://git-scm.com/downloads Have taken any git high level tutorial or following LinkedIn learning courses https://www.linkedin.com/learning/git-essential-training-the-basics/ https://www.linkedin.com/learning/git-branches-merges-and-remotes/ The Official Git Docs","title":"Prerequisites"},{"location":"level101/git/git-basics/#what-to-expect-from-this-course","text":"As an engineer in the field of computer science, having knowledge of version control tools becomes almost a requirement. While there are a lot of version control tools that exist today like SVN, Mercurial, etc, Git perhaps is the most used one and this course we will be working with Git. While this course does not start with Git 101 and expects basic knowledge of git as a prerequisite, it will reintroduce the git concepts known by you with details covering what is happening under the hood as you execute various git commands. 
So that next time you run a git command, you will be able to press enter more confidently!","title":"What to expect from this course"},{"location":"level101/git/git-basics/#what-is-not-covered-under-this-course","text":"Advanced usage and specifics of internal implementation details of Git.","title":"What is not covered under this course"},{"location":"level101/git/git-basics/#course-contents","text":"Git Basics Working with Branches Git with Github Hooks","title":"Course Contents"},{"location":"level101/git/git-basics/#git-basics","text":"Though you might be aware already, let's revisit why we need a version control system. As the project grows and multiple developers start working on it, an efficient method for collaboration is warranted. Git helps the team collaborate easily and also maintains the history of the changes happening with the codebase.","title":"Git Basics"},{"location":"level101/git/git-basics/#creating-a-git-repo","text":"Any folder can be converted into a git repository. After executing the following command, we will see a .git folder within the folder, which makes our folder a git repository. All the magic that git does, .git folder is the enabler for the same. # creating an empty folder and changing current dir to it $ cd /tmp $ mkdir school-of-sre $ cd school-of-sre/ # initialize a git repo $ git init Initialized empty Git repository in /private/tmp/school-of-sre/.git/ As the output says, an empty git repo has been initialized in our folder. Let's take a look at what is there. $ ls .git/ HEAD config description hooks info objects refs There are a bunch of folders and files in the .git folder. As I said, all these enables git to do its magic. We will look into some of these folders and files. But for now, what we have is an empty git repository.","title":"Creating a Git Repo"},{"location":"level101/git/git-basics/#tracking-a-file","text":"Now as you might already know, let us create a new file in our repo (we will refer to the folder as repo now.) And see git status $ echo \"I am file 1\" > file1.txt $ git status On branch master No commits yet Untracked files: (use \"git add ...\" to include in what will be committed) file1.txt nothing added to commit but untracked files present (use \"git add\" to track) The current git status says No commits yet and there is one untracked file. Since we just created the file, git is not tracking that file. We explicitly need to ask git to track files and folders. (also checkout gitignore ) And how we do that is via git add command as suggested in the above output. Then we go ahead and create a commit. $ git add file1.txt $ git status On branch master No commits yet Changes to be committed: (use \"git rm --cached ...\" to unstage) new file: file1.txt $ git commit -m \"adding file 1\" [master (root-commit) df2fb7a] adding file 1 1 file changed, 1 insertion(+) create mode 100644 file1.txt Notice how after adding the file, git status says Changes to be committed: . What it means is whatever is listed there, will be included in the next commit. Then we go ahead and create a commit, with an attached messaged via -m .","title":"Tracking a File"},{"location":"level101/git/git-basics/#more-about-a-commit","text":"Commit is a snapshot of the repo. Whenever a commit is made, a snapshot of the current state of repo (the folder) is taken and saved. Each commit has a unique ID. ( df2fb7a for the commit we made in the previous step). As we keep adding/changing more and more contents and keep making commits, all those snapshots are stored by git. 
Again, all this magic happens inside the .git folder. This is where all this snapshot or versions are stored in an efficient manner.","title":"More About a Commit"},{"location":"level101/git/git-basics/#adding-more-changes","text":"Let us create one more file and commit the change. It would look the same as the previous commit we made. $ echo \"I am file 2\" > file2.txt $ git add file2.txt $ git commit -m \"adding file 2\" [master 7f3b00e] adding file 2 1 file changed, 1 insertion(+) create mode 100644 file2.txt A new commit with ID 7f3b00e has been created. You can issue git status at any time to see the state of the repository. **IMPORTANT: Note that commit IDs are long string (SHA) but we can refer to a commit by its initial few (8 or more) characters too. We will interchangeably using shorter and longer commit IDs.** Now that we have two commits, let's visualize them: $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 git log , as the name suggests, prints the log of all the git commits. Here you see two additional arguments, --oneline prints the shorter version of the log, ie: the commit message only and not the person who made the commit and when. --graph prints it in graph format. Now at this moment the commits might look like just one in each line but all commits are stored as a tree like data structure internally by git. That means there can be two or more children commits of a given commit. And not just a single line of commits. We will look more into this part when we get to the Branches section. For now this is our commit history: df2fb7a ===> 7f3b00e","title":"Adding More Changes"},{"location":"level101/git/git-basics/#are-commits-really-linked","text":"As I just said, the two commits we just made are linked via tree like data structure and we saw how they are linked. But let's actually verify it. Everything in git is an object. Newly created files are stored as an object. Changes to file are stored as an objects and even commits are objects. To view contents of an object we can use the following command with the object's ID. We will take a look at the contents of the second commit $ git cat-file -p 7f3b00e tree ebf3af44d253e5328340026e45a9fa9ae3ea1982 parent df2fb7a61f5d40c1191e0fdeb0fc5d6e7969685a author Sanket Patel 1603273316 -0700 committer Sanket Patel 1603273316 -0700 adding file 2 Take a note of parent attribute in the above output. It points to the commit id of the first commit we made. So this proves that they are linked! Additionally you can see the second commit's message in this object. As I said all this magic is enabled by .git folder and the object to which we are looking at also is in that folder. $ ls .git/objects/7f/3b00eaa957815884198e2fdfec29361108d6a9 .git/objects/7f/3b00eaa957815884198e2fdfec29361108d6a9 It is stored in .git/objects/ folder. All the files and changes to them as well are stored in this folder.","title":"Are commits really linked?"},{"location":"level101/git/git-basics/#the-version-control-part-of-git","text":"We already can see two commits (versions) in our git log. One thing a version control tool gives you is ability to browse back and forth in history. For example: some of your users are running an old version of code and they are reporting an issue. In order to debug the issue, you need access to the old code. The one in your current repo is the latest code. In this example, you are working on the second commit (7f3b00e) and someone reported an issue with the code snapshot at commit (df2fb7a). 
This is how you would get access to the code at any older commit # Current contents, two files present $ ls file1.txt file2.txt # checking out to (an older) commit $ git checkout df2fb7a Note: checking out 'df2fb7a'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b HEAD is now at df2fb7a adding file 1 # checking contents, can verify it has old contents $ ls file1.txt So this is how we would get access to old versions/snapshots. All we need is a reference to that snapshot. Upon executing git checkout ... , what git does for you is use the .git folder, see what was the state of things (files and folders) at that version/reference and replace the contents of current directory with those contents. The then-existing content will no longer be present in the local dir (repo) but we can and will still get access to them because they are tracked via git commit and .git folder has them stored/tracked.","title":"The Version Control part of Git"},{"location":"level101/git/git-basics/#reference","text":"I mention in the previous section that we need a reference to the version. By default, git repo is made of tree of commits. And each commit has a unique IDs. But the unique ID is not the only thing we can reference commits via. There are multiple ways to reference commits. For example: HEAD is a reference to current commit. Whatever commit your repo is checked out at, HEAD will point to that. HEAD~1 is reference to previous commit. So while checking out previous version in section above, we could have done git checkout HEAD~1 . Similarly, master is also a reference (to a branch). Since git uses tree like structure to store commits, there of course will be branches. And the default branch is called master . Master (or any branch reference) will point to the latest commit in the branch. Even though we have checked out to the previous commit in out repo, master still points to the latest commit. And we can get back to the latest version by checkout at master reference $ git checkout master Previous HEAD position was df2fb7a adding file 1 Switched to branch 'master' # now we will see latest code, with two files $ ls file1.txt file2.txt Note, instead of master in above command, we could have used commit's ID as well.","title":"Reference"},{"location":"level101/git/git-basics/#references-and-the-magic","text":"Let's look at the state of things. Two commits, master and HEAD references are pointing to the latest commit $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 The magic? Let's examine these files: $ cat .git/refs/heads/master 7f3b00eaa957815884198e2fdfec29361108d6a9 Viola! Where master is pointing to is stored in a file. Whenever git needs to know where master reference is pointing to, or if git needs to update where master points, it just needs to update the file above. So when you create a new commit, a new commit is created on top of the current commit and the master file is updated with the new commit's ID. Similary, for HEAD reference: $ cat .git/HEAD ref: refs/heads/master We can see HEAD is pointing to a reference called refs/heads/master . 
So HEAD will point where ever the master points.","title":"References and The Magic"},{"location":"level101/git/git-basics/#little-adventure","text":"We discussed how git will update the files as we execute commands. But let's try to do it ourselves, by hand, and see what happens. $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 Now let's change master to point to the previous/first commit. $ echo df2fb7a61f5d40c1191e0fdeb0fc5d6e7969685a > .git/refs/heads/master $ git log --oneline --graph * df2fb7a (HEAD -> master) adding file 1 # RESETTING TO ORIGINAL $ echo 7f3b00eaa957815884198e2fdfec29361108d6a9 > .git/refs/heads/master $ git log --oneline --graph * 7f3b00e (HEAD -> master) adding file 2 * df2fb7a adding file 1 We just edited the master reference file and now we can see only the first commit in git log. Undoing the change to the file brings the state back to original. Not so much of magic, is it?","title":"Little Adventure"},{"location":"level101/git/github-hooks/","text":"Git with GitHub Till now all the operations we did were in our local repo while git also helps us in a collaborative environment. GitHub is one place on the internet where you can centrally host your git repos and collaborate with other developers. Most of the workflow will remain the same as we discussed, with addition of couple of things: Pull: to pull latest changes from github (the central) repo Push: to push your changes to github repo so that it's available to all people GitHub has written nice guides and tutorials about this and you can refer them here: GitHub Hello World Git Handbook Hooks Git has another nice feature called hooks. Hooks are basically scripts which will be called when a certain event happens. Here is where hooks are located: $ ls .git/hooks/ applypatch-msg.sample fsmonitor-watchman.sample pre-applypatch.sample pre-push.sample pre-receive.sample update.sample commit-msg.sample post-update.sample pre-commit.sample pre-rebase.sample prepare-commit-msg.sample Names are self explanatory. These hooks are useful when you want to do certain things when a certain event happens. If you want to run tests before pushing code, you would want to setup pre-push hooks. Let's try to create a pre commit hook. $ echo \"echo this is from pre commit hook\" > .git/hooks/pre-commit $ chmod +x .git/hooks/pre-commit We basically create a file called pre-commit in hooks folder and make it executable. Now if we make a commit, we should see the message getting printed. $ echo \"sample file\" > sample.txt $ git add sample.txt $ git commit -m \"adding sample file\" this is from pre commit hook # <===== THE MESSAGE FROM HOOK EXECUTION [master 9894e05] adding sample file 1 file changed, 1 insertion(+) create mode 100644 sample.txt","title":"Github and Hooks"},{"location":"level101/git/github-hooks/#git-with-github","text":"Till now all the operations we did were in our local repo while git also helps us in a collaborative environment. GitHub is one place on the internet where you can centrally host your git repos and collaborate with other developers. 
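In practice that collaboration comes down to wiring up a remote and a couple of extra commands; a rough sketch, where the repository URL and name are hypothetical placeholders for your own GitHub repo:
$ git remote add origin git@github.com:<your-username>/school-of-sre.git
$ git push -u origin master
$ git pull origin master
The detailed, authoritative walkthrough is in the GitHub guides linked below.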
Most of the workflow will remain the same as we discussed, with addition of couple of things: Pull: to pull latest changes from github (the central) repo Push: to push your changes to github repo so that it's available to all people GitHub has written nice guides and tutorials about this and you can refer them here: GitHub Hello World Git Handbook","title":"Git with GitHub"},{"location":"level101/git/github-hooks/#hooks","text":"Git has another nice feature called hooks. Hooks are basically scripts which will be called when a certain event happens. Here is where hooks are located: $ ls .git/hooks/ applypatch-msg.sample fsmonitor-watchman.sample pre-applypatch.sample pre-push.sample pre-receive.sample update.sample commit-msg.sample post-update.sample pre-commit.sample pre-rebase.sample prepare-commit-msg.sample Names are self explanatory. These hooks are useful when you want to do certain things when a certain event happens. If you want to run tests before pushing code, you would want to setup pre-push hooks. Let's try to create a pre commit hook. $ echo \"echo this is from pre commit hook\" > .git/hooks/pre-commit $ chmod +x .git/hooks/pre-commit We basically create a file called pre-commit in hooks folder and make it executable. Now if we make a commit, we should see the message getting printed. $ echo \"sample file\" > sample.txt $ git add sample.txt $ git commit -m \"adding sample file\" this is from pre commit hook # <===== THE MESSAGE FROM HOOK EXECUTION [master 9894e05] adding sample file 1 file changed, 1 insertion(+) create mode 100644 sample.txt","title":"Hooks"},{"location":"level101/linux_basics/command_line_basics/","text":"Command Line Basics Lab Environment Setup One can use an online bash interpreter to run all the commands that are provided as examples in this course. This will also help you in getting a hands-on experience of various linux commands. REPL is one of the popular online bash interpreters for running linux commands. We will be using it for running all the commands mentioned in this course. What is a Command A command is a program that tells the operating system to perform specific work. Programs are stored as files in linux. Therefore, a command is also a file which is stored somewhere on the disk. Commands may also take additional arguments as input from the user. These arguments are called command line arguments. Knowing how to use the commands is important and there are many ways to get help in Linux, especially for commands. Almost every command will have some form of documentation, most commands will have a command-line argument -h or --help that will display a reasonable amount of documentation. But the most popular documentation system in Linux is called man pages - short for manual pages. Using --help to show the documentation for ls command. File System Organization The linux file system has a hierarchical (or tree-like) structure with its highest level directory called root ( denoted by / ). Directories present inside the root directory stores file related to the system. These directories in turn can either store system files or application files or user related files. bin | The executable program of most commonly used commands reside in bin directory dev | This directory contains files related to devices on the system etc | This directory contains all the system configuration files home | This directory contains user related files and directories. 
lib | This directory contains all the library files mnt | This directory contains files related to mounted devices on the system proc | This directory contains files related to the running processes on the system root | This directory contains root user related files and directories. sbin | This directory contains programs used for system administration. tmp | This directory is used to store temporary files on the system usr | This directory is used to store application programs on the system Commands for Navigating the File System There are three basic commands which are used frequently to navigate the file system: ls pwd cd We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell. pwd (print working directory) At any given moment of time, we will be standing in a certain directory. To get the name of the directory in which we are standing, we can use the pwd command in linux. We will now use the cd command to move to a different directory and then print the working directory. cd (change directory) The cd command can be used to change the working directory. Using the command, you can move from one directory to another. In the below example, we are initially in the root directory. we have then used the cd command to change the directory. ls (list files and directories)** The ls command is used to list the contents of a directory. It will list down all the files and folders present in the given directory. If we just type ls in the shell, it will list all the files and directories present in the current directory. We can also provide the directory name as argument to ls command. It will then list all the files and directories inside the given directory. Commands for Manipulating Files There are five basic commands which are used frequently to manipulate files: touch mkdir cp mv rm We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell. touch (create new file) The touch command can be used to create an empty new file. This command is very useful for many other purposes but we will discuss the simplest use case of creating a new file. General syntax of using touch command touch mkdir (create new directories) The mkdir command is used to create directories.You can use ls command to verify that the new directory is created. General syntax of using mkdir command mkdir rm (delete files and directories) The rm command can be used to delete files and directories. It is very important to note that this command permanently deletes the files and directories. It's almost impossible to recover these files and directories once you have executed rm command on them successfully. Do run this command with care. General syntax of using rm command: rm Let's try to understand the rm command with an example. We will try to delete the file and directory we created using touch and mkdir command respectively. cp (copy files and directories) The cp command is used to copy files and directories from one location to another. Do note that the cp command doesn't do any change to the original files or directories. The original files or directories and their copy both co-exist after running cp command successfully. General syntax of using cp command: cp We are currently in the '/home/runner' directory. We will use the mkdir command to create a new directory named \"test_directory\". 
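A representative transcript of the steps just described, assuming we start in /home/runner (the exact file listing will differ in your environment):
$ pwd
/home/runner
$ mkdir test_directory
$ ls
_test_runner.py  test_directory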
We will now try to copy the \"_test_runner.py\" file to the directory we created just now. Do note that nothing happened to the original \"_test_runner.py\" file. It's still there in the current directory. A new copy of it got created inside the \"test_directory\". We can also use the cp command to copy the whole directory from one location to another. Let's try to understand this with an example. We again used the mkdir command to create a new directory called \"another_directory\". We then used the cp command along with an additional argument '-r' to copy the \"test_directory\". mv (move files and directories) The mv command can either be used to move files or directories from one location to another or it can be used to rename files or directories. Do note that moving files and copying them are very different. When you move the files or directories, the original copy is lost. General syntax of using mv command: mv In this example, we will use the mv command to move the \"_test_runner.py\" file to \"test_directory\". In this case, this file already exists in \"test_directory\". The mv command will just replace it. Do note that the original file doesn't exist in the current directory after mv command ran successfully. We can also use the mv command to move a directory from one location to another. In this case, we do not need to use the '-r' flag that we did while using the cp command. Do note that the original directory will not exist if we use mv command. One of the important uses of the mv command is to rename files and directories. Let's see how we can use this command for renaming. We have first changed our location to \"test_directory\". We then use the mv command to rename the \"\"_test_runner.py\" file to \"test.py\". Commands for Viewing Files There are five basic commands which are used frequently to view the files: cat head tail more less We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell. We will create a new file called \"numbers.txt\" and insert numbers from 1 to 100 in this file. Each number will be in a separate line. Do not worry about the above command now. It's an advanced command which is used to generate numbers. We have then used a redirection operator to push these numbers to the file. We will be discussing I/O redirection in the later sections. cat The most simplest use of cat command is to print the contents of the file on your output screen. This command is very useful and can be used for many other purposes. We will study about other use cases later. You can try to run the above command and you will see numbers being printed from 1 to 100 on your screen. You will need to scroll up to view all the numbers. head The head command displays the first 10 lines of the file by default. We can include additional arguments to display as many lines as we want from the top. In this example, we are only able to see the first 10 lines from the file when we use the head command. By default, head command will only display the first 10 lines. If we want to specify the number of lines we want to see from start, use the '-n' argument to provide the input. tail The tail command displays the last 10 lines of the file by default. We can include additional arguments to display as many lines as we want from the end of the file. By default, the tail command will only display the last 10 lines. 
If we want to specify the number of lines we want to see from the end, use '-n' argument to provide the input. In this example, we are only able to see the last 5 lines from the file when we use the tail command with explicit -n option. more More command displays the contents of a file or a command output, displaying one screen at a time in case the file is large (Eg: log files). It also allows forward navigation and limited backward navigation in the file. More command displays as much as can fit on the current screen and waits for user input to advance. Forward navigation can be done by pressing Enter, which advances the output by one line and Space, which advances the output by one screen. less Less command is an improved version of more. It displays the contents of a file or a command output, one page at a time. It allows backward navigation as well as forward navigation in the file and also has search options. We can use arrow keys for advancing backward or forward by one line. For moving forward by one page, press Space and for moving backward by one page, press b on your keyboard. You can go to the beginning and the end of a file instantly. Echo Command in Linux The echo command is one of the simplest commands that is used in the shell. This command is equivalent to what we have in other programming languages. The echo command prints the given input string on the screen. Text Processing Commands In the previous section, we learned how to view the content of a file. In many cases, we will be interested in performing the below operations: Print only the lines which contain a particular word(s) Replace a particular word with another word in a file Sort the lines in a particular order There are three basic commands which are used frequently to process texts: grep sed sort We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell. We will create a new file called \"numbers.txt\" and insert numbers from 1 to 10 in this file. Each number will be in a separate line. grep The grep command in its simplest form can be used to search particular words in a text file. It will display all the lines in a file that contains a particular input. The word we want to search is provided as an input to the grep command. General syntax of using grep command: grep In this example, we are trying to search for a string \"1\" in this file. The grep command outputs the lines where it found this string. sed The sed command in its simplest form can be used to replace a text in a file. General syntax of using the sed command for replacement: sed 's///' Let's try to replace each occurrence of \"1\" in the file with \"3\" using sed command. The content of the file will not change in the above example. To do so, we have to use an extra argument '-i' so that the changes are reflected back in the file. sort The sort command can be used to sort the input provided to it as an argument. By default, it will sort in increasing order. Let's first see the content of the file before trying to sort it. Now, we will try to sort the file using the sort command. The sort command sorts the content in lexicographical order. The content of the file will not change in the above example. I/O Redirection Each open file gets assigned a file descriptor. A file descriptor is an unique identifier for open files in the system. 
There are always three default files open, stdin (the keyboard), stdout (the screen), and stderr (error messages output to the screen). These files can be redirected. Everything is a file in linux - https://unix.stackexchange.com/questions/225537/everything-is-a-file Till now, we have displayed all the output on the screen which is the standard output. We can use some special operators to redirect the output of the command to files or even to the input of other commands. I/O redirection is a very powerful feature. In the below example, we have used the '>' operator to redirect the output of ls command to output.txt file. In the below example, we have redirected the output from echo command to a file. We can also redirect the output of a command as an input to another command. This is possible with the help of pipes. In the below example, we have passed the output of cat command as an input to grep command using pipe(|) operator. In the below example, we have passed the output of sort command as an input to uniq command using pipe(|) operator. The uniq command only prints the unique numbers from the input. I/O redirection - https://tldp.org/LDP/abs/html/io-redirection.html","title":"Command Line Basics"},{"location":"level101/linux_basics/command_line_basics/#command-line-basics","text":"","title":"Command Line Basics"},{"location":"level101/linux_basics/command_line_basics/#lab-environment-setup","text":"One can use an online bash interpreter to run all the commands that are provided as examples in this course. This will also help you in getting a hands-on experience of various linux commands. REPL is one of the popular online bash interpreters for running linux commands. We will be using it for running all the commands mentioned in this course.","title":"Lab Environment Setup"},{"location":"level101/linux_basics/command_line_basics/#what-is-a-command","text":"A command is a program that tells the operating system to perform specific work. Programs are stored as files in linux. Therefore, a command is also a file which is stored somewhere on the disk. Commands may also take additional arguments as input from the user. These arguments are called command line arguments. Knowing how to use the commands is important and there are many ways to get help in Linux, especially for commands. Almost every command will have some form of documentation, most commands will have a command-line argument -h or --help that will display a reasonable amount of documentation. But the most popular documentation system in Linux is called man pages - short for manual pages. Using --help to show the documentation for ls command.","title":"What is a Command"},{"location":"level101/linux_basics/command_line_basics/#file-system-organization","text":"The linux file system has a hierarchical (or tree-like) structure with its highest level directory called root ( denoted by / ). Directories present inside the root directory stores file related to the system. These directories in turn can either store system files or application files or user related files. bin | The executable program of most commonly used commands reside in bin directory dev | This directory contains files related to devices on the system etc | This directory contains all the system configuration files home | This directory contains user related files and directories. 
lib | This directory contains all the library files mnt | This directory contains files related to mounted devices on the system proc | This directory contains files related to the running processes on the system root | This directory contains root user related files and directories. sbin | This directory contains programs used for system administration. tmp | This directory is used to store temporary files on the system usr | This directory is used to store application programs on the system","title":"File System Organization"},{"location":"level101/linux_basics/command_line_basics/#commands-for-navigating-the-file-system","text":"There are three basic commands which are used frequently to navigate the file system: ls pwd cd We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell.","title":"Commands for Navigating the File System"},{"location":"level101/linux_basics/command_line_basics/#pwd-print-working-directory","text":"At any given moment of time, we will be standing in a certain directory. To get the name of the directory in which we are standing, we can use the pwd command in linux. We will now use the cd command to move to a different directory and then print the working directory.","title":"pwd (print working directory)"},{"location":"level101/linux_basics/command_line_basics/#cd-change-directory","text":"The cd command can be used to change the working directory. Using the command, you can move from one directory to another. In the below example, we are initially in the root directory. we have then used the cd command to change the directory.","title":"cd (change directory)"},{"location":"level101/linux_basics/command_line_basics/#ls-list-files-and-directories","text":"The ls command is used to list the contents of a directory. It will list down all the files and folders present in the given directory. If we just type ls in the shell, it will list all the files and directories present in the current directory. We can also provide the directory name as argument to ls command. It will then list all the files and directories inside the given directory.","title":"ls (list files and directories)**"},{"location":"level101/linux_basics/command_line_basics/#commands-for-manipulating-files","text":"There are five basic commands which are used frequently to manipulate files: touch mkdir cp mv rm We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell.","title":"Commands for Manipulating Files"},{"location":"level101/linux_basics/command_line_basics/#touch-create-new-file","text":"The touch command can be used to create an empty new file. This command is very useful for many other purposes but we will discuss the simplest use case of creating a new file. General syntax of using touch command touch ","title":"touch (create new file)"},{"location":"level101/linux_basics/command_line_basics/#mkdir-create-new-directories","text":"The mkdir command is used to create directories.You can use ls command to verify that the new directory is created. General syntax of using mkdir command mkdir ","title":"mkdir (create new directories)"},{"location":"level101/linux_basics/command_line_basics/#rm-delete-files-and-directories","text":"The rm command can be used to delete files and directories. It is very important to note that this command permanently deletes the files and directories. 
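As a minimal sketch of the file-manipulation commands covered so far (touch, mkdir, rm), using throwaway names that are not part of the course examples; the -i flag, not used in the course, makes rm ask for confirmation before deleting, which is a sensible habit given the warning above:
$ touch demo.txt             # create an empty file
$ mkdir demo_dir             # create an empty directory
$ rm -i demo.txt             # delete the file, asking for confirmation first
rm: remove regular empty file 'demo.txt'? y
$ rm -r demo_dir             # delete the directory and anything inside it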
It's almost impossible to recover these files and directories once you have executed rm command on them successfully. Do run this command with care. General syntax of using rm command: rm Let's try to understand the rm command with an example. We will try to delete the file and directory we created using touch and mkdir command respectively.","title":"rm (delete files and directories)"},{"location":"level101/linux_basics/command_line_basics/#cp-copy-files-and-directories","text":"The cp command is used to copy files and directories from one location to another. Do note that the cp command doesn't do any change to the original files or directories. The original files or directories and their copy both co-exist after running cp command successfully. General syntax of using cp command: cp We are currently in the '/home/runner' directory. We will use the mkdir command to create a new directory named \"test_directory\". We will now try to copy the \"_test_runner.py\" file to the directory we created just now. Do note that nothing happened to the original \"_test_runner.py\" file. It's still there in the current directory. A new copy of it got created inside the \"test_directory\". We can also use the cp command to copy the whole directory from one location to another. Let's try to understand this with an example. We again used the mkdir command to create a new directory called \"another_directory\". We then used the cp command along with an additional argument '-r' to copy the \"test_directory\". mv (move files and directories) The mv command can either be used to move files or directories from one location to another or it can be used to rename files or directories. Do note that moving files and copying them are very different. When you move the files or directories, the original copy is lost. General syntax of using mv command: mv In this example, we will use the mv command to move the \"_test_runner.py\" file to \"test_directory\". In this case, this file already exists in \"test_directory\". The mv command will just replace it. Do note that the original file doesn't exist in the current directory after mv command ran successfully. We can also use the mv command to move a directory from one location to another. In this case, we do not need to use the '-r' flag that we did while using the cp command. Do note that the original directory will not exist if we use mv command. One of the important uses of the mv command is to rename files and directories. Let's see how we can use this command for renaming. We have first changed our location to \"test_directory\". We then use the mv command to rename the \"\"_test_runner.py\" file to \"test.py\".","title":"cp (copy files and directories)"},{"location":"level101/linux_basics/command_line_basics/#commands-for-viewing-files","text":"There are five basic commands which are used frequently to view the files: cat head tail more less We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell. We will create a new file called \"numbers.txt\" and insert numbers from 1 to 100 in this file. Each number will be in a separate line. Do not worry about the above command now. It's an advanced command which is used to generate numbers. We have then used a redirection operator to push these numbers to the file. 
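The generating command itself is not shown in the text here; one possible way to produce such a file (an assumption for illustration, not necessarily the exact command used in the course) is:
$ seq 1 100 > numbers.txt    # seq prints the numbers 1 to 100, '>' redirects them into the file
$ wc -l numbers.txt          # sanity check: the file should contain 100 lines
100 numbers.txt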
We will be discussing I/O redirection in the later sections.","title":"Commands for Viewing Files"},{"location":"level101/linux_basics/command_line_basics/#cat","text":"The most simplest use of cat command is to print the contents of the file on your output screen. This command is very useful and can be used for many other purposes. We will study about other use cases later. You can try to run the above command and you will see numbers being printed from 1 to 100 on your screen. You will need to scroll up to view all the numbers.","title":"cat"},{"location":"level101/linux_basics/command_line_basics/#head","text":"The head command displays the first 10 lines of the file by default. We can include additional arguments to display as many lines as we want from the top. In this example, we are only able to see the first 10 lines from the file when we use the head command. By default, head command will only display the first 10 lines. If we want to specify the number of lines we want to see from start, use the '-n' argument to provide the input.","title":"head"},{"location":"level101/linux_basics/command_line_basics/#tail","text":"The tail command displays the last 10 lines of the file by default. We can include additional arguments to display as many lines as we want from the end of the file. By default, the tail command will only display the last 10 lines. If we want to specify the number of lines we want to see from the end, use '-n' argument to provide the input. In this example, we are only able to see the last 5 lines from the file when we use the tail command with explicit -n option.","title":"tail"},{"location":"level101/linux_basics/command_line_basics/#more","text":"More command displays the contents of a file or a command output, displaying one screen at a time in case the file is large (Eg: log files). It also allows forward navigation and limited backward navigation in the file. More command displays as much as can fit on the current screen and waits for user input to advance. Forward navigation can be done by pressing Enter, which advances the output by one line and Space, which advances the output by one screen.","title":"more"},{"location":"level101/linux_basics/command_line_basics/#less","text":"Less command is an improved version of more. It displays the contents of a file or a command output, one page at a time. It allows backward navigation as well as forward navigation in the file and also has search options. We can use arrow keys for advancing backward or forward by one line. For moving forward by one page, press Space and for moving backward by one page, press b on your keyboard. You can go to the beginning and the end of a file instantly.","title":"less"},{"location":"level101/linux_basics/command_line_basics/#echo-command-in-linux","text":"The echo command is one of the simplest commands that is used in the shell. This command is equivalent to what we have in other programming languages. The echo command prints the given input string on the screen.","title":"Echo Command in Linux"},{"location":"level101/linux_basics/command_line_basics/#text-processing-commands","text":"In the previous section, we learned how to view the content of a file. 
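For instance, a quick recap sketch of the viewing commands, assuming the numbers.txt file created earlier:
$ head -n 3 numbers.txt      # first three lines
1
2
3
$ tail -n 2 numbers.txt      # last two lines
99
100
$ less numbers.txt           # page through the whole file interactively; press q to quit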
In many cases, we will be interested in performing the below operations: Print only the lines which contain a particular word(s) Replace a particular word with another word in a file Sort the lines in a particular order There are three basic commands which are used frequently to process texts: grep sed sort We will now try to understand what each command does and how to use these commands. You should also practice the given examples on the online bash shell. We will create a new file called \"numbers.txt\" and insert numbers from 1 to 10 in this file. Each number will be in a separate line.","title":"Text Processing Commands"},{"location":"level101/linux_basics/command_line_basics/#grep","text":"The grep command in its simplest form can be used to search particular words in a text file. It will display all the lines in a file that contains a particular input. The word we want to search is provided as an input to the grep command. General syntax of using grep command: grep In this example, we are trying to search for a string \"1\" in this file. The grep command outputs the lines where it found this string.","title":"grep"},{"location":"level101/linux_basics/command_line_basics/#sed","text":"The sed command in its simplest form can be used to replace a text in a file. General syntax of using the sed command for replacement: sed 's///' Let's try to replace each occurrence of \"1\" in the file with \"3\" using sed command. The content of the file will not change in the above example. To do so, we have to use an extra argument '-i' so that the changes are reflected back in the file.","title":"sed"},{"location":"level101/linux_basics/command_line_basics/#sort","text":"The sort command can be used to sort the input provided to it as an argument. By default, it will sort in increasing order. Let's first see the content of the file before trying to sort it. Now, we will try to sort the file using the sort command. The sort command sorts the content in lexicographical order. The content of the file will not change in the above example.","title":"sort"},{"location":"level101/linux_basics/command_line_basics/#io-redirection","text":"Each open file gets assigned a file descriptor. A file descriptor is an unique identifier for open files in the system. There are always three default files open, stdin (the keyboard), stdout (the screen), and stderr (error messages output to the screen). These files can be redirected. Everything is a file in linux - https://unix.stackexchange.com/questions/225537/everything-is-a-file Till now, we have displayed all the output on the screen which is the standard output. We can use some special operators to redirect the output of the command to files or even to the input of other commands. I/O redirection is a very powerful feature. In the below example, we have used the '>' operator to redirect the output of ls command to output.txt file. In the below example, we have redirected the output from echo command to a file. We can also redirect the output of a command as an input to another command. This is possible with the help of pipes. In the below example, we have passed the output of cat command as an input to grep command using pipe(|) operator. In the below example, we have passed the output of sort command as an input to uniq command using pipe(|) operator. The uniq command only prints the unique numbers from the input. 
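A compact sketch tying together the text-processing commands and the redirection operators described above, assuming the 10-line numbers.txt from this section:
$ grep "1" numbers.txt            # print only the lines containing the string "1"
1
10
$ sed 's/1/3/' numbers.txt        # replace "1" with "3" in the output; add -i to change the file itself
$ sort numbers.txt > sorted.txt   # redirect the sorted output into a new file
$ cat sorted.txt | uniq           # pipe the contents through uniq to drop duplicate lines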
I/O redirection - https://tldp.org/LDP/abs/html/io-redirection.html","title":"I/O Redirection"},{"location":"level101/linux_basics/conclusion/","text":"Conclusion We have covered the basics of Linux operating systems and basic commands used in linux. We have also covered the Linux server administration commands. We hope that this course will make it easier for you to operate on the command line. Applications in SRE Role As a SRE, you will be required to perform some general tasks on these Linux servers. You will also be using the command line when you are troubleshooting issues. Moving from one location to another in the filesystem will require the help of ls , pwd and cd commands. You may need to search some specific information in the log files. grep command would be very useful here. I/O redirection will become handy if you want to store the output in a file or pass it as an input to another command. tail command is very useful to view the latest data in the log file. Different users will have different permissions depending on their roles. We will also not want everyone in the company to access our servers for security reasons. Users permissions can be restricted with chown , chmod and chgrp commands. ssh is one of the most frequently used commands for a SRE. Logging into servers and troubleshooting along with performing basic administration tasks will only be possible if we are able to login into the server. What if we want to run an apache server or nginx on a server? We will first install it using the package manager. Package management commands become important here. Managing services on servers is another critical responsibility of a SRE. Systemd related commands can help in troubleshooting issues. If a service goes down, we can start it using systemctl start command. We can also stop a service in case it is not needed. Monitoring is another core responsibility of a SRE. Memory and CPU are two important system level metrics which should be monitored. Commands like top and free are quite helpful here. If a service is throwing an error, how do we find out the root cause of the error ? We will certainly need to check logs to find out the whole stack trace of the error. The log file will also tell us the number of times the error has occurred along with time when it started. Useful Courses and tutorials Edx basic linux commands course Edx Red Hat Enterprise Linux Course https://linuxcommand.org/lc3_learning_the_shell.php","title":"Conclusion"},{"location":"level101/linux_basics/conclusion/#conclusion","text":"We have covered the basics of Linux operating systems and basic commands used in linux. We have also covered the Linux server administration commands. We hope that this course will make it easier for you to operate on the command line.","title":"Conclusion"},{"location":"level101/linux_basics/conclusion/#applications-in-sre-role","text":"As a SRE, you will be required to perform some general tasks on these Linux servers. You will also be using the command line when you are troubleshooting issues. Moving from one location to another in the filesystem will require the help of ls , pwd and cd commands. You may need to search some specific information in the log files. grep command would be very useful here. I/O redirection will become handy if you want to store the output in a file or pass it as an input to another command. tail command is very useful to view the latest data in the log file. Different users will have different permissions depending on their roles. 
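As a concrete illustration of the grep/tail/redirection workflow mentioned above before we continue with permissions, here is a hedged example; the log path and the error string are hypothetical and used purely for illustration:
$ tail -n 500 /var/log/myapp/error.log | grep "OutOfMemoryError" > oom_events.txt
# tail pulls the latest lines, grep filters the error we care about,
# and '>' saves the matching lines to a file for later analysis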
We will also not want everyone in the company to access our servers for security reasons. Users permissions can be restricted with chown , chmod and chgrp commands. ssh is one of the most frequently used commands for a SRE. Logging into servers and troubleshooting along with performing basic administration tasks will only be possible if we are able to login into the server. What if we want to run an apache server or nginx on a server? We will first install it using the package manager. Package management commands become important here. Managing services on servers is another critical responsibility of a SRE. Systemd related commands can help in troubleshooting issues. If a service goes down, we can start it using systemctl start command. We can also stop a service in case it is not needed. Monitoring is another core responsibility of a SRE. Memory and CPU are two important system level metrics which should be monitored. Commands like top and free are quite helpful here. If a service is throwing an error, how do we find out the root cause of the error ? We will certainly need to check logs to find out the whole stack trace of the error. The log file will also tell us the number of times the error has occurred along with time when it started.","title":"Applications in SRE Role"},{"location":"level101/linux_basics/conclusion/#useful-courses-and-tutorials","text":"Edx basic linux commands course Edx Red Hat Enterprise Linux Course https://linuxcommand.org/lc3_learning_the_shell.php","title":"Useful Courses and tutorials"},{"location":"level101/linux_basics/intro/","text":"Linux Basics Introduction Prerequisites Should be comfortable in using any operating systems like Windows, Linux or Mac Expected to have fundamental knowledge of operating systems What to expect from this course This course is divided into three parts. In the first part, we cover the fundamentals of Linux operating systems. We will talk about Linux architecture, Linux distributions and uses of Linux operating systems. We will also talk about the difference between GUI and CLI. In the second part, we cover some basic commands used in Linux. We will focus on commands used for navigating the file system, viewing and manipulating files, I/O redirection etc. In the third part, we cover Linux system administration. This includes day to day tasks performed by Linux admins, like managing users/groups, managing file permissions, monitoring system performance, log files etc. In the second and third part, we will be taking examples to understand the concepts. What is not covered under this course We are not covering advanced Linux commands and bash scripting in this course. We will also not be covering Linux internals. 
Course Contents The following topics has been covered in this course: Introduction to Linux What are Linux Operating Systems What are popular Linux distributions Uses of Linux Operating Systems Linux Architecture Graphical user interface (GUI) vs Command line interface (CLI) Command Line Basics Lab Environment Setup What is a Command File System Organization Navigating File System Manipulating Files Viewing Files Echo Command Text Processing Commands I/O Redirection Linux system administration Lab Environment Setup User/Groups management Becoming a Superuser File Permissions SSH Command Package Management Process Management Memory Management Daemons and Systemd Logs Conclusion Applications in SRE Role Useful Courses and tutorials What are Linux operating systems Most of us are familiar with the Windows operating system used in more than 75% of the personal computers. The Windows operating systems are based on Windows NT kernel. A kernel is the most important part of an operating system - it performs important functions like process management, memory management, filesystem management etc. Linux operating systems are based on the Linux kernel. A Linux based operating system will consist of Linux kernel, GUI/CLI, system libraries and system utilities. The Linux kernel was independently developed and released by Linus Torvalds. The Linux kernel is free and open-source - https://github.com/torvalds/linux Linux is a kernel and not a complete operating system. Linux kernel is combined with GNU system to make a complete operating system. Therefore, linux based operating systems are also called as GNU/Linux systems. GNU is an extensive collection of free softwares like compiler, debugger, C library etc. Linux and the GNU System History of Linux - https://en.wikipedia.org/wiki/History_of_Linux What are popular Linux distributions A Linux distribution(distro) is an operating system based on the Linux kernel and a package management system. A package management system consists of tools that help in installing, upgrading, configuring and removing softwares on the operating system. Software are usually adopted to a distribution and are packaged in a distro specific format. These packages are available through a distro specific repository. Packages are installed and managed in the operating system by a package manager. List of popular Linux distributions: Fedora Ubuntu Debian Centos Red Hat Enterprise Linux Suse Arch Linux Packaging systems Distributions Package manager Debian style (.deb) Debian, Ubuntu APT Red Hat style (.rpm) Fedora, CentOS, Red Hat Enterprise Linux YUM Linux Architecture The Linux kernel is monolithic in nature. System calls are used to interact with the Linux kernel space. Kernel code can only be executed in the kernel mode. Non-kernel code is executed in the user mode. Device drivers are used to communicate with the hardware devices. Uses of Linux Operating Systems Operating system based on Linux kernel are widely used in: Personal computers Servers Mobile phones - Android is based on Linux operating system Embedded devices - watches, televisions, traffic lights etc Satellites Network devices - routers, switches etc. Graphical user interface (GUI) vs Command line interface (CLI) A user interacts with a computer with the help of user interfaces. The user interface can be either GUI or CLI. Graphical user interface allows a user to interact with the computer using graphics such as icons and images. 
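Referring back to the packaging table above for a moment, a hedged sketch of how the same package would be installed on each family of distributions (nginx is only an example package name):
# Debian-style distributions (Debian, Ubuntu) use APT
$ sudo apt install nginx
# Red Hat-style distributions (Fedora, CentOS, Red Hat Enterprise Linux) use YUM
$ sudo yum install nginx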
When a user clicks on an icon to open an application on a computer, he or she is actually using the GUI. It's easy to perform tasks using GUI. Command line interface allows a user to interact with the computer using commands. A user types the command in a terminal and the system helps in executing these commands. A new user with experience on GUI may find it difficult to interact with CLI as he/she needs to be aware of the commands to perform a particular operation. Shell vs Terminal Shell is a program that takes commands from the users and gives them to the operating system for processing. Shell is an example of a CLI (command line interface). Bash is one of the most popular shell programs available on Linux servers. Other popular shell programs are zsh, ksh and tcsh. Terminal is a program that opens a window and lets you interact with the shell. Some popular examples of terminals are gnome-terminal, xterm, konsole etc. Linux users do use the terms shell, terminal, prompt, console etc. interchangeably. In simple terms, these all refer to a way of taking commands from the user.","title":"Introduction"},{"location":"level101/linux_basics/intro/#linux-basics","text":"","title":"Linux Basics"},{"location":"level101/linux_basics/intro/#introduction","text":"","title":"Introduction"},{"location":"level101/linux_basics/intro/#prerequisites","text":"Should be comfortable in using any operating systems like Windows, Linux or Mac Expected to have fundamental knowledge of operating systems","title":"Prerequisites"},{"location":"level101/linux_basics/intro/#what-to-expect-from-this-course","text":"This course is divided into three parts. In the first part, we cover the fundamentals of Linux operating systems. We will talk about Linux architecture, Linux distributions and uses of Linux operating systems. We will also talk about the difference between GUI and CLI. In the second part, we cover some basic commands used in Linux. We will focus on commands used for navigating the file system, viewing and manipulating files, I/O redirection etc. In the third part, we cover Linux system administration. This includes day to day tasks performed by Linux admins, like managing users/groups, managing file permissions, monitoring system performance, log files etc. In the second and third part, we will be taking examples to understand the concepts.","title":"What to expect from this course"},{"location":"level101/linux_basics/intro/#what-is-not-covered-under-this-course","text":"We are not covering advanced Linux commands and bash scripting in this course. 
We will also not be covering Linux internals.","title":"What is not covered under this course"},{"location":"level101/linux_basics/intro/#course-contents","text":"The following topics has been covered in this course: Introduction to Linux What are Linux Operating Systems What are popular Linux distributions Uses of Linux Operating Systems Linux Architecture Graphical user interface (GUI) vs Command line interface (CLI) Command Line Basics Lab Environment Setup What is a Command File System Organization Navigating File System Manipulating Files Viewing Files Echo Command Text Processing Commands I/O Redirection Linux system administration Lab Environment Setup User/Groups management Becoming a Superuser File Permissions SSH Command Package Management Process Management Memory Management Daemons and Systemd Logs Conclusion Applications in SRE Role Useful Courses and tutorials","title":"Course Contents"},{"location":"level101/linux_basics/intro/#what-are-linux-operating-systems","text":"Most of us are familiar with the Windows operating system used in more than 75% of the personal computers. The Windows operating systems are based on Windows NT kernel. A kernel is the most important part of an operating system - it performs important functions like process management, memory management, filesystem management etc. Linux operating systems are based on the Linux kernel. A Linux based operating system will consist of Linux kernel, GUI/CLI, system libraries and system utilities. The Linux kernel was independently developed and released by Linus Torvalds. The Linux kernel is free and open-source - https://github.com/torvalds/linux Linux is a kernel and not a complete operating system. Linux kernel is combined with GNU system to make a complete operating system. Therefore, linux based operating systems are also called as GNU/Linux systems. GNU is an extensive collection of free softwares like compiler, debugger, C library etc. Linux and the GNU System History of Linux - https://en.wikipedia.org/wiki/History_of_Linux","title":"What are Linux operating systems"},{"location":"level101/linux_basics/intro/#what-are-popular-linux-distributions","text":"A Linux distribution(distro) is an operating system based on the Linux kernel and a package management system. A package management system consists of tools that help in installing, upgrading, configuring and removing softwares on the operating system. Software are usually adopted to a distribution and are packaged in a distro specific format. These packages are available through a distro specific repository. Packages are installed and managed in the operating system by a package manager. List of popular Linux distributions: Fedora Ubuntu Debian Centos Red Hat Enterprise Linux Suse Arch Linux Packaging systems Distributions Package manager Debian style (.deb) Debian, Ubuntu APT Red Hat style (.rpm) Fedora, CentOS, Red Hat Enterprise Linux YUM","title":"What are popular Linux distributions"},{"location":"level101/linux_basics/intro/#linux-architecture","text":"The Linux kernel is monolithic in nature. System calls are used to interact with the Linux kernel space. Kernel code can only be executed in the kernel mode. Non-kernel code is executed in the user mode. 
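To see the system-call boundary in action, one option (not covered in this course, mentioned only as an illustration and assuming the strace utility is installed) is to trace a simple command, which prints the system calls the process makes while crossing into kernel space:
$ strace -c ls > /dev/null   # -c prints a summary table of the system calls ls made
# typical entries include execve, openat, read, write, close, mmap ...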
Device drivers are used to communicate with the hardware devices.","title":"Linux Architecture"},{"location":"level101/linux_basics/intro/#uses-of-linux-operating-systems","text":"Operating system based on Linux kernel are widely used in: Personal computers Servers Mobile phones - Android is based on Linux operating system Embedded devices - watches, televisions, traffic lights etc Satellites Network devices - routers, switches etc.","title":"Uses of Linux Operating Systems"},{"location":"level101/linux_basics/intro/#graphical-user-interface-gui-vs-command-line-interface-cli","text":"A user interacts with a computer with the help of user interfaces. The user interface can be either GUI or CLI. Graphical user interface allows a user to interact with the computer using graphics such as icons and images. When a user clicks on an icon to open an application on a computer, he or she is actually using the GUI. It's easy to perform tasks using GUI. Command line interface allows a user to interact with the computer using commands. A user types the command in a terminal and the system helps in executing these commands. A new user with experience on GUI may find it difficult to interact with CLI as he/she needs to be aware of the commands to perform a particular operation.","title":"Graphical user interface (GUI) vs Command line interface (CLI)"},{"location":"level101/linux_basics/intro/#shell-vs-terminal","text":"Shell is a program that takes commands from the users and gives them to the operating system for processing. Shell is an example of a CLI (command line interface). Bash is one of the most popular shell programs available on Linux servers. Other popular shell programs are zsh, ksh and tcsh. Terminal is a program that opens a window and lets you interact with the shell. Some popular examples of terminals are gnome-terminal, xterm, konsole etc. Linux users do use the terms shell, terminal, prompt, console etc. interchangeably. In simple terms, these all refer to a way of taking commands from the user.","title":"Shell vs Terminal"},{"location":"level101/linux_basics/linux_server_administration/","text":"Linux Server Administration In this course will try to cover some of the common tasks that a linux server administrator performs. We will first try to understand what a particular command does and then try to understand the commands using examples. Do keep in mind that it's very important to practice the Linux commands on your own. Lab Environment Setup Install docker on your system - https://docs.docker.com/engine/install/ We will be running all the commands on Red Hat Enterprise Linux (RHEL) 8 system. We will run most of the commands used in this module in the above Docker container. Multi-User Operating Systems An operating system is considered as multi-user if it allows multiple people/users to use a computer and not affect each other's files and preferences. Linux based operating systems are multi-user in nature as it allows multiple users to access the system at the same time. A typical computer will only have one keyboard and monitor but multiple users can log in via SSH if the computer is connected to the network. We will cover more about SSH later. As a server administrator, we are mostly concerned with the Linux servers which are physically present at a very large distance from us. We can connect to these servers with the help of remote login methods like SSH. Since Linux supports multiple users, we need to have a method which can protect the users from each other. 
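For example, a sketch with hypothetical users alice and bob (not from the course): if alice tries to read a file inside bob's home directory without permission, the kernel denies the access:
[alice@host ~]$ cat /home/bob/notes.txt
cat: /home/bob/notes.txt: Permission denied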
One user should not be able to access and modify the files of other users. User/Group Management Users in Linux have an associated user ID called UID. Users also have a home directory and a login shell associated with them. A group is a collection of one or more users. A group makes it easier to share permissions among a set of users. Each group has a group ID called GID associated with it. id command The id command can be used to find the uid and gid associated with a user. It also lists the groups to which the user belongs. The uid and gid associated with the root user are 0. A good way to find out the current user in Linux is to use the whoami command. The \"root\" user, or superuser, is the most privileged user with unrestricted access to all the resources on the system. It has UID 0. Important files associated with users/groups /etc/passwd Stores the user name, the uid, the gid, the home directory, the login shell etc. /etc/shadow Stores the password associated with the users /etc/group Stores information about different groups on the system If you want to understand each field discussed in the above outputs, you can go through the below links: https://tldp.org/LDP/lame/LAME/linux-admin-made-easy/shadow-file-formats.html https://tldp.org/HOWTO/User-Authentication-HOWTO/x71.html Important commands for managing users Some of the commands which are used frequently to manage users/groups on Linux are the following: useradd - Creates a new user passwd - Adds or modifies passwords for a user usermod - Modifies attributes of a user userdel - Deletes a user useradd The useradd command adds a new user in Linux. We will create a new user 'shivam'. We will also verify that the user has been created by tailing the /etc/passwd file. The uid and gid are 1000 for the newly created user. The home directory assigned to the user is /home/shivam and the login shell assigned is /bin/bash. Do note that the user home directory and login shell can be modified later on. If we do not specify any value for attributes like home directory or login shell, default values will be assigned to the user. We can also override these default values when creating a new user. passwd The passwd command is used to create or modify passwords for a user. In the above examples, we have not assigned any password for users 'shivam' or 'amit' while creating them. \"!!\" in an account entry in shadow means the account of a user has been created, but not yet given a password. Let's now try to create a password for user \"shivam\". Do remember the password, as we will be using it in later examples. Also, let's change the password for the root user now. When we switch from a normal user to the root user, it will prompt for a password. Also, when you log in as the root user, the password will be asked. usermod The usermod command is used to modify the attributes of a user, like the home directory or the shell. Let's try to modify the login shell of user \"amit\" to \"/bin/bash\". In a similar way, you can also modify many other attributes for a user. Try 'usermod -h' for a list of attributes you can modify. userdel The userdel command is used to remove a user on Linux. Once we remove a user, all the information related to that user will be removed. Let's try to delete the user \"amit\". After deleting the user, you will not find the entry for that user in the \"/etc/passwd\" or \"/etc/shadow\" file. 
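A condensed sketch of the user-management workflow described above, run as root inside the lab container; the users 'shivam' and 'amit' are the ones used in the course, and the outputs shown here are trimmed and illustrative:
$ useradd shivam && useradd amit     # create two users with default attributes
$ tail -n 2 /etc/passwd              # verify the new entries
shivam:x:1000:1000::/home/shivam:/bin/bash
amit:x:1001:1001::/home/amit:/bin/bash
$ passwd shivam                      # set a password for shivam (prompts interactively)
$ id shivam                          # check uid, gid and group membership
uid=1000(shivam) gid=1000(shivam) groups=1000(shivam)
$ usermod -s /bin/bash amit          # modify an attribute, here the login shell
$ userdel -r amit                    # delete amit; -r also removes his home directory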
Important commands for managing groups Commands for managing groups are quite similar to the commands used for managing users. Each command is not explained in detail here as they are quite similar. You can try running these commands on your system. groupadd \\ Creates a new group groupmod \\ Modifies attributes of a group groupdel \\ Deletes a group gpasswd \\ Modifies password for group We will now try to add user \"shivam\" to the group we have created above. Becoming a Superuser Before running the below commands, do make sure that you have set up a password for user \"shivam\" and user \"root\" using the passwd command described in the above section. The su command can be used to switch users in Linux. Let's now try to switch to user \"shivam\". Let's now try to open the \"/etc/shadow\" file. The operating system didn't allow the user \"shivam\" to read the content of the \"/etc/shadow\" file. This is an important file in Linux which stores the passwords of users. This file can only be accessed by root or users who have the superuser privileges. The sudo command allows a user to run commands with the security privileges of the root user. Do remember that the root user has all the privileges on a system. We can also use su command to switch to the root user and open the above file but doing that will require the password of the root user. An alternative way which is preferred on most modern operating systems is to use sudo command for becoming a superuser. Using this way, a user has to enter his/her password and they need to be a part of the sudo group. How to provide superpriveleges to other users ? Let's first switch to the root user using su command. Do note that using the below command will need you to enter the password for the root user. In case, you forgot to set a password for the root user, type \"exit\" and you will be back as the root user. Now, set up a password using the passwd command. The file /etc/sudoers holds the names of users permitted to invoke sudo . In redhat operating systems, this file is not present by default. We will need to install sudo. We will discuss the yum command in detail in later sections. Try to open the \"/etc/sudoers\" file on the system. The file has a lot of information. This file stores the rules that users must follow when running the sudo command. For example, root is allowed to run any commands from anywhere. One easy way of providing root access to users is to add them to a group which has permissions to run all the commands. \"wheel\" is a group in redhat Linux with such privileges. Let's add the user \"shivam\" to this group so that it also has sudo privileges. Let's now switch back to user \"shivam\" and try to access the \"/etc/shadow\" file. We need to use sudo before running the command since it can only be accessed with the sudo privileges. We have already given sudo privileges to user \u201cshivam\u201d by adding him to the group \u201cwheel\u201d. File Permissions On a Linux operating system, each file and directory is assigned access permissions for the owner of the file, the members of a group of related users and everybody else. This is to make sure that one user is not allowed to access the files and resources of another user. To see the permissions of a file, we can use the ls command. Let's look at the permissions of /etc/passwd file. Let's go over some of the important fields in the output that are related to file permissions. Chmod command The chmod command is used to modify files and directories permissions in Linux. 
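A minimal sketch of chmod in use (the file name is just an example, and the numeric form of the permissions argument is explained right after this):
$ touch test.txt
$ ls -l test.txt
-rw-r--r-- 1 root root 0 ... test.txt   # owner can read/write; group and others can only read
$ chmod g+w test.txt                     # symbolic form: add write permission for the group
$ chmod 664 test.txt                     # numeric form of the same permissions
$ ls -l test.txt
-rw-rw-r-- 1 root root 0 ... test.txt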
The chmod command accepts permissions in as a numerical argument. We can think of permission as a series of bits with 1 representing True or allowed and 0 representing False or not allowed. Permission rwx Binary Decimal Read, write and execute rwx 111 7 Read and write rw- 110 6 Read and execute r-x 101 5 Read only r-- 100 4 Write and execute -wx 011 3 Write only -w- 010 2 Execute only --x 001 1 None --- 000 0 We will now create a new file and check the permission of the file. The group owner doesn't have the permission to write to this file. Let's give the group owner or root the permission to write to it using chmod command. Chmod command can be also used to change the permissions of a directory in the similar way. Chown command The chown command is used to change the owner of files or directories in Linux. Command syntax: chown \\ \\ In case, we do not have sudo privileges, we need to use sudo command . Let's switch to user 'shivam' and try changing the owner. We have also changed the owner of the file to root before running the below command. Chown command can also be used to change the owner of a directory in the similar way. Chgrp command The chgrp command can be used to change the group ownership of files or directories in Linux. The syntax is very similar to that of chown command. Chgrp command can also be used to change the owner of a directory in the similar way. SSH Command The ssh command is used for logging into the remote systems, transfer files between systems and for executing commands on a remote machine. SSH stands for secure shell and is used to provide an encrypted secured connection between two hosts over an insecure network like the internet. Reference: https://www.ssh.com/ssh/command/ We will now discuss passwordless authentication which is secure and most commonly used for ssh authentication. Passwordless Authentication Using SSH Using this method, we can ssh into hosts without entering the password. This method is also useful when we want some scripts to perform ssh-related tasks. Passwordless authentication requires the use of a public and private key pair. As the name implies, the public key can be shared with anyone but the private key should be kept private. Lets not get into the details of how this authentication works. You can read more about it here Steps for setting up a passwordless authentication with a remote host: Generating public-private key pair If we already have a key pair stored in \\~/.ssh directory, we will not need to generate keys again. Install openssh package which contains all the commands related to ssh. Generate a key pair using the ssh-keygen command. One can choose the default values for all prompts. After running the ssh-keygen command successfully, we should see two keys present in the \\~/.ssh directory. Id_rsa is the private key and id_rsa.pub is the public key. Do note that the private key can only be read and modified by you. Transferring the public key to the remote host There are multiple ways to transfer the public key to the remote server. We will look at one of the most common ways of doing it using the ssh-copy-id command. Install the openssh-clients package to use ssh-copy-id command. Use the ssh-copy-id command to copy your public key to the remote host. Now, ssh into the remote host using the password authentication. Our public key should be there in \\~/.ssh/authorized_keys now. \\~/.ssh/authorized_key contains a list of public keys. The users associated with these public keys have the ssh access into the remote host. 
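Putting the passwordless-authentication steps together in one sketch; remote-user and remote-host are placeholders for your own values:
$ ssh-keygen -t rsa                      # generate the key pair; accepting the default prompts is fine
$ ls ~/.ssh/
id_rsa  id_rsa.pub
$ ssh-copy-id remote-user@remote-host    # copy the public key to the remote host; asks for the password once
$ ssh remote-user@remote-host            # subsequent logins no longer prompt for a password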
How to run commands on a remote host? General syntax: ssh <user>@<hostname> <command> How to transfer files from one host to another host? General syntax: scp 