Software testing is a set of processes and tasks that take place throughout the software development life cycle. It helps to reduce the risk of failures that may occur during operational use and, thus, ensure the quality of the software system.
Objectives of software testing
· Preventing defects from entering the system.
· Finding defects existing in the system.
· Measuring the quality of the system.
1. DevOps
Yes. High deployment frequencies are possible, and leading IT organizations achieve them today.
· Flickr - The popular image and video hosting portal deploys release updates to its applications every day.
· Facebook - The most used social networking site deploys an average of 2 releases every day.
· Amazon - The world's largest internet company by revenue, does an average of 50,000,000 code deployments per year.
They do it with the help of DevOps.
Continuous Testing (CT): Continuous Testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain rapid feedback on the business risks associated with a software release candidate.
Pipelines:
Case study: an international telecommunications and television company
In this case study we will look at how DevOps was implemented for an international telecommunications and television company. This client is one of the largest broadband internet service providers outside of the United States.
Infosys helped them to successfully build an overall testing governance structure using DevOps practices.
Before DevOps implementation:
With deployment frequencies as high as at least 2 per week, the client faced challenges due to:
· Total integration time, from code deployment to test readiness taking 5 weeks
· Manual regression tests to test every deployment.
· Manual smoke tests required to ensure all the services are up and running on different environments.
Tools used:
· Jenkins: For automated triggering of CI-CD pipeline processes
· Apache Subversion: For automated version control of code base and automated test scripts
· Apache Ant: For automated software build processing
· JIRA: For defect management
· Selenium: For functional test automation
· Apache JMeter: For performance testing
Continuous integration: The most dominant player in the ‘Three Cs’ is Continuous integration (CI) and it’s a necessary approach for any Agile team. CI requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
By integrating regularly, teams can detect errors quickly, and locate them more easily. Simply, it ensures bugs are caught earlier in the development cycle, which makes them less expensive to fix - and maintains a consistent quality.
Continuous delivery: Continuous delivery is the practice of streamlining/automating all the processes leading up to deployment. This includes many steps, such as validating the quality of the build in the previous environment (ex.: dev environment), promoting to staging, etc. These steps, done manually, can take significant effort and time. Using cloud technologies and proper orchestration, they can be automated.
Teams should ensure they have a monitoring dashboard for their production environment in place in order to eliminate performance bottlenecks and respond fast to issues. This completes an efficient CD process.
Continuous testing: Continuous testing (CT), also referred to as Continuous Quality, is the practice of embedding and automating test activities into every commit. CT helps developers use their time more efficiently: without it, fixing a bug in code written long ago means first recalling what the code did, unwinding anything written on top of the original code, and then re-testing the fix - not a short process. Testing that takes place every commit, every few hours, nightly and weekly not only increases confidence in the application quality, it also drives team efficiency.
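To make the idea concrete, here is a minimal sketch of the kind of automated check a CT pipeline would run on every commit; the function and its behaviour are hypothetical stand-ins for real application code:

```python
# Hypothetical unit under test: a discount calculator somewhere in the application.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Automated checks a CI/CT pipeline would execute on every commit.
def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0   # normal case
    assert apply_discount(99.99, 0) == 99.99    # zero discount is a no-op

test_apply_discount()
print("all checks passed")
```

In a real pipeline such tests live in a test suite that the CI server runs automatically on each check-in, rather than being called by hand as above.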
2. Networking
When two or more machines are connected with each other, it is called a network, and the devices in a network are called hosts.
Your classroom computer is actually part of the Infosys network. All the computers in Infosys are connected with each other; that is why you are able to send a mail to any Infosys mail ID.
What do you think is the use of LAN cable?
LAN cable is used to transmit data to and from the computer.
The amount of data that can be transmitted in a given period of time is called Bandwidth. It is measured in Mbps, Gbps, etc.
| Type of cable | Bandwidth |
| --- | --- |
| Twisted-Pair | 100 Mbps |
| Fiber-Optic | >10 Gbps |
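As a rough worked example of what these bandwidth figures mean, transfer time can be estimated as data size divided by bandwidth (this ignores protocol overhead, so real transfers are somewhat slower):

```python
def transfer_time_seconds(size_megabytes, bandwidth_mbps):
    """Estimate transfer time: megabytes -> megabits, then divide by Mbps."""
    size_megabits = size_megabytes * 8  # 1 byte = 8 bits
    return size_megabits / bandwidth_mbps

# A 100 MB file over a 100 Mbps twisted-pair link: 800 Mb / 100 Mbps = 8 seconds.
print(transfer_time_seconds(100, 100))    # 8.0
# The same file over a 10 Gbps (10000 Mbps) fiber-optic link finishes in 0.08 s.
print(transfer_time_seconds(100, 10000))  # 0.08
```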
The LAN cable starts from your desktop. Do you know where it ends?
It ends in a Switch.
A switch has many ports. Each port can be connected to an individual network device.
Check if your computer is directly connected to your friend's computer. How does your data reach their computer?
· The message you send reaches the switch. The switch sends the message to your friend's machine.
· A switch is a device which connects all the devices within a network. All your computers are connected to a switch!
Here we can see that all the computers are connected to a single switch, like a star. The layout of a network is called a topology and local networks use the star topology.
If your switch cannot send data beyond the Infosys Mysore network, how can you send data to a machine in the supermarket's network?
A router is a device which is used to connect different networks.
Just the way a switch has ports, a router has interfaces through which other switches and routers are connected.
We can form a network of networks also. The Internet is the largest network of networks! Here we can see that the routers are highly interconnected, like a mesh. Such a topology is called a mesh topology. The mesh topology improves redundancy, as the data can reach the destination via a different route if some link fails.
IpAddress
Let’s say you want to send a parcel to your friend in the USA. There are millions of houses all over the world. How can you uniquely identify your friend's house?
Just the way you can uniquely identify your friend's house by its unique address, we can uniquely identify every device in the network by its address, called the IP address!
Since the number of devices on the internet far exceeds the number that can be supported by IPv4, the world is gradually adopting the IPv6 system.
Find your IP address by typing ipconfig -all in command prompt.
Since available IP addresses are limited, organizations allocate IP addresses to machines temporarily and later deallocate them. A machine which does this is called a DHCP server. DHCP stands for Dynamic Host Configuration Protocol.
The below approach is used for trainee machines.
Find the duration of the temporary IP allocation through ipconfig -all
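As a toy illustration of what a DHCP server's job amounts to (handing out addresses from a limited pool and reclaiming them later), here is a simplified sketch; real DHCP uses a discover/offer/request/acknowledge handshake and lease timers, none of which is modelled here:

```python
class ToyDhcpPool:
    """Hands out IP addresses from a small pool and reclaims released ones."""

    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}  # MAC address -> leased IP address

    def lease(self, mac):
        if mac in self.leases:       # a renewing client keeps its address
            return self.leases[mac]
        ip = self.free.pop(0)        # raises IndexError when the pool is exhausted
        self.leases[mac] = ip
        return ip

    def release(self, mac):
        self.free.append(self.leases.pop(mac))

pool = ToyDhcpPool(["10.0.0.2", "10.0.0.3"])
print(pool.lease("01-23-45-67-89-ab"))  # 10.0.0.2
pool.release("01-23-45-67-89-ab")       # the address goes back into the pool
```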
Domain Name
Domain Name is a name given to an IP address or a collection of IP addresses. No two organizations can have the same domain name.
www.infosys.com is the name given to the IP address of the computer within infosys.com domain, which stores the Infosys website
DNS (Domain Name System) Server is a machine which has a database of domain names and the corresponding IP addresses.
DNS is used for resolving names into IP addresses.
Ping is a command which checks connectivity with a specific machine. It also gives the IP address associated with a domain name.
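Name resolution can also be observed from code. Python's standard library delegates the lookup to the operating system's resolver, which in turn queries the configured DNS server; "localhost" is used below so the snippet works without internet access, but you could try a real name such as www.infosys.com instead:

```python
import socket

# Resolve a host name to an IPv4 address, the same job DNS performs for domains.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1, the loopback address
```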
How can your device know if the destination IP address is within the same network?
If the network IDs of the IP addresses are the same, then both addresses are in the same network.
Subnet mask is a value which is used to separate out the network ID and the host ID.
For example, consider IP address 10.68.190.51
| Subnet mask | Network ID |
| --- | --- |
| 255.0.0.0 | 10 |
| 255.255.0.0 | 10.68 |
| 255.255.255.0 | 10.68.190 |
Note: Network ID is determined by a bitwise AND operation between IP Address and Subnet Mask.
Applying the subnet mask of 255.255.255.0, the Network ID of sender and receiver is 10.123.45. Hence, they both are in the same network.
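The bitwise-AND rule above can be verified in a few lines of Python; the IP address and masks below are the example values from the table (the table writes only the significant octets, while the AND operation zeroes out the host portion):

```python
import ipaddress

def network_id(ip, mask):
    """Bitwise-AND each bit of the IP address with the subnet mask."""
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    return str(ipaddress.IPv4Address(ip_int & mask_int))

print(network_id("10.68.190.51", "255.0.0.0"))      # 10.0.0.0
print(network_id("10.68.190.51", "255.255.0.0"))    # 10.68.0.0
print(network_id("10.68.190.51", "255.255.255.0"))  # 10.68.190.0
```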
Among all the machines in the same network how can your machine find out which is the receiver machine?
A similar thing happens in a network. The sender broadcasts a request, which the switch forwards to all the other devices.
The machine with the matching IP address responds, giving details of its MAC address. A MAC address is a unique value given to a computer.
This process is called ARP. ARP – Address Resolution Protocol
MAC – Media Access Control
A MAC address is also a unique value given to every device.
If IP address itself can uniquely identify a computer, then why do we need a MAC address?
Different unique values are meant for different purposes. For example, your employee number though unique is relevant only within Infosys, whereas a unique passport number is relevant throughout the world.
Similarly, IP and MAC are meant for different purposes.
Some uses of MAC addresses are:
- Uniquely identifying a machine when an IP address is not available. For example, a DHCP server uses the MAC address to assign the IP address.
- An IP address can change, for example, when you take your laptop from one DC to another. But the MAC address is permanent. That is why it is also called the physical address of the system.
| IP Address | MAC Address |
| --- | --- |
| Unique | Unique |
| Example: 10.123.45.12 | Example: 01-23-45-67-89-ab |
| It can change. | It is permanent. It never changes. |
| Used to identify the network and the device | Used to identify the device |
| Not understood by a switch | A switch understands only MAC addresses and ports |
Now that your machine knows the MAC address of the receiver machine, it can now send the data.
The switch has a MAC table. The MAC table has a list of port numbers and MAC addresses. Depending on the MAC address passed, it will send information to that specific machine alone.
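The switch's MAC-table lookup can be sketched as a plain dictionary from MAC address to port; the addresses and port numbers below are made up for illustration:

```python
# Hypothetical MAC table learned by the switch: MAC address -> port number.
mac_table = {
    "01-23-45-67-89-ab": 1,
    "01-23-45-67-89-cd": 2,
}

def forward(dest_mac):
    """Return the port to send the frame out of, or 'flood' to all ports if unknown."""
    return mac_table.get(dest_mac, "flood")

print(forward("01-23-45-67-89-cd"))   # 2
print(forward("ff-ff-ff-ff-ff-ff"))   # flood
```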
Your mobile is not connected to the network through wires. Then how is it able to send and receive data? This is done through Wi-Fi.
Internet
- Start the web service from your command prompt:
python CurrencyConverterService.py
- Call the web service from your web browser:
http://127.0.0.1:9999/currency_convert?amount_in_dollars=6
- Open another command prompt. Call the web service using the Python client:
python CurrencyConverter_Python_client.py 6
- Open another command prompt. Call the web service using the Java client:
java CurrencyConverter_Java_Client 6
Note 1: Ensure you are in the correct directory in the command prompt.
Note 2: To stop the server, press Ctrl+C in the command prompt.
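The actual CurrencyConverterService.py is not shown in this material, but a service answering that URL shape could be sketched with Python's standard library alone; the conversion rate, class name, and helper names below are assumptions for illustration and may differ from the real script:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

RATE_INR_PER_USD = 80.0  # assumed fixed rate, purely for illustration

def convert(amount_in_dollars):
    """Convert US dollars to rupees at the assumed fixed rate."""
    return amount_in_dollars * RATE_INR_PER_USD

class CurrencyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?amount_in_dollars=6 from a URL like the one in the steps above.
        query = parse_qs(urlparse(self.path).query)
        dollars = float(query.get("amount_in_dollars", ["0"])[0])
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(str(convert(dollars)).encode())

def run(port=9999):
    """Start the service on 127.0.0.1:<port>; stop it with Ctrl+C."""
    HTTPServer(("127.0.0.1", port), CurrencyHandler).serve_forever()

# To start: run(), then browse http://127.0.0.1:9999/currency_convert?amount_in_dollars=6
```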
3. Cloud Computing
Cloud computing means that all the computing hardware and software resources that you need to process your tasks are provided for you, "as a service" over the internet, by a vendor instead of you owning and maintaining them.
It is the responsibility of the vendor, the Cloud Service Provider (CSP), to develop, own and maintain these resources and make them available to the consumers over the internet.
You, the consumer, need not know exactly where the resources are located and how it all works.
Example
If you want to use an email service, you would need the hardware and software resources for
· an email server to send, receive and store your mails
· an email client to access the data and operations in your email server.
Instead, if you use a cloud-based mail service like Gmail, Outlook, etc., all you need is a device, with an app or a browser, connected to the internet.
A traditional enterprise IT set up (on-premise) would consist of the following layers.
Organizations are looking at cloud computing service models like the ones below:
1. Infrastructure as a Service is the provisioning of IT infrastructure resources, like processors, storage, networks, firewalls, load balancers, etc., over the internet.
2. Platform as a Service provides all of the capabilities that you need to support a complete application lifecycle - building, testing, deploying, managing, and updating - using the same integrated environment.
3. Software as a Service describes any cloud service where a fully functional and complete software product is delivered to users over the internet. Instead of installing and maintaining software, you simply access it via the internet, freeing yourself from complex software and hardware management.
Types of Cloud Platform plans
Each deployment model aims at addressing one or more concerns of the cloud consumer. Therefore, it is very important that consumers prioritize their concerns before opting for a particular model.
1. Public cloud
2. Private cloud
3. Community cloud
4. Hybrid cloud
1. Public Cloud is one that is available for use by the general public; hence, it is the most common and popular deployment model available today. Public clouds are entirely owned, deployed, monitored, and managed by the cloud service provider, which delivers its computing resources over the internet.
Example
· Dropbox provides storage space to the general public.
· Google provides Gmail and other cloud services to the general public.
2. Private Cloud is available only to users within a single organization.
Concerns addressed
· Security
· Compliance
· Governance/Control
· Performance
3. A community cloud is a private cloud that is shared by two or more organizations having shared concerns like security requirements, policy, and compliance considerations.
4. A hybrid cloud is a combination of two or more different cloud deployment models.
In the Real World
As a software services professional, you might get to work on cloud computing in one of the following ways:
1. Cloud implementations
2. Cloud based developments
3. Migration projects
Misconception 3: Cloud computing is the same as virtualization
Though virtualization enables logical (not physical) separation of shared hardware resources, it is only an enabler for implementing cloud and not a mandatory requirement. Physically separate resources, like different makes and models of mobile devices, can also be hosted on the cloud, without virtualization, to be used by developers and testers.
4. Jenkins
Whenever developers create a build, testers execute these test cases and scripts using their own frameworks, and the results are saved separately for UFT, Selenium, and IDTW.
cd D:\DevOps
d:
java -jar jenkins.war
admin /admin
5. Vagrant
“Create and configure lightweight, reproducible and portable environments.”
Vagrant is the command-line utility for managing the lifecycle of virtual machines.
Vagrant is an open-source software product for building and maintaining portable virtual development environments. It is written in Ruby.
The HashiCorp repository contains more than 10,000 boxes!
Alternatives : Docker, CLI Tools, Terraform
Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the "works on my machine" excuse a relic of the past.
Introduction
Every developer has faced problems when it comes to setting up a development environment. Usually the environment behaves as it should on one machine, while on another machine it behaves differently or does not function at all.
Vagrant changes the way developers set up and maintain their work environments. Vagrant makes it possible to create a configurable, portable development environment easily. Each environment is configured in a so-called Vagrantfile.
In this Vagrantfile, the developer specifies how the environment should be set up and configured, which software should be installed, and which operating system should be used. The Vagrantfile can then be distributed to other developers, who just need this file in order to set up the same development environment on their own machines. Vagrant will then follow every step as defined in the provided Vagrantfile and initialise the machine.
· Vagrant encourages automation to set up your development environments using shell scripts or configuration management software.
· Vagrant allows you to work with the same operating system that is running in production, whether your physical development machine is running Linux, Mac OS X, or Windows.
If you were to run the virtual development environment manually — without Vagrant’s help, that is — you would have to follow these steps:
1. Download the VM image.
2. Start the VM.
3. Configure the VM’s shared directories and network interfaces.
4. Maybe install some software within the VM.
With Vagrant, all these tasks (and many more) are automated. The command $ vagrant up can do the following (depending on the configuration file):
• Automatically download and install a VM, if necessary
• Boot the VM
• Configure various resources: RAM, CPUs, network connections, and shared folders
• Install additional software within the VM by using tools such as Puppet, Chef, Ansible, and Salt
Architecture
Vagrant sits on top of existing and well-known virtualization solutions such as VirtualBox, VMware Workstation, VMware Fusion, and Hyper-V, and provides a unified and simple command-line interface to manage VMs. To work with Vagrant, you have to install at least one provider.
In Vagrant terminology, a provider is a software virtualization solution such as VirtualBox, VMware Fusion, or VMware Workstation.
Commands
$ vagrant init [url]
$ vagrant up
$ vagrant halt
$ vagrant destroy [--force]
$ vagrant reload
$ vagrant ssh
$ vagrant status
Working
Set the Proxy on cmdline
set http_proxy=http://10.219.2.220:80
set https_proxy=http://10.219.2.220:80
Installation
2. After download, just run the binary and install it.
4. Again, just run the binary to install it.
2. Vagrant is a command-line based tool. Once installation is complete, open a console window and create a new directory called 'vagrant_intro' to work with a new Vagrant box.
cd ~
mkdir vagrant_intro
cd vagrant_intro
3. To add a box, go to the box repository at https://app.vagrantup.com/boxes/search and run this command:
$ vagrant box add <name>
$ vagrant box add ubuntu/trusty64
This will download the box named "ubuntu/trusty64" from HashiCorp's Vagrant Cloud box catalog.
In the above command, you will notice that boxes are namespaced. Boxes are broken down into two parts - the username and the box name - separated by a slash
4. To create an environment, run the init command inside your folder; it will create a 'Vagrantfile':
vagrant init ubuntu/trusty64
This downloads the Ubuntu box onto our local machine. We can check the downloaded box at this location - Windows: C:\Users\<Username>\.vagrant.d\boxes, Linux/Mac: ~/.vagrant.d/boxes
The generated 'Vagrantfile' is a Ruby file that controls your [one or more] virtual machines.
A 'Vagrantfile'has been placed in this directory. You are now ready to 'vagrant up' your first virtual environment! Please read the comments in the Vagrantfile as well as documentation on'vagrantup.com' for more information on using Vagrant.
5. Start the environment
$ vagrant up
6. Connect to the environment
$ vagrant ssh
To set up a shared folder, edit the Vagrantfile as follows:
config.vm.synced_folder "D:\\DevOps\\Instl\\VagrantBoxes\\SyncFolder", "/vagrant"
We named our synced folder "vagrant"; inside the VM, you can find the files from SyncFolder under /vagrant/.
Projects
References
Installation : https://youtu.be/nZrQsxCPT2s
6. Microsoft Azure
Azure Scenario
Tisco, an IT and networking firm, develops, manufactures and sells networking hardware to markets across the globe. The organization currently maintains its own infrastructure, but due to its massive growth in the global market, it is keener on expanding its manufacturing units than on expanding the existing infrastructure. Some of the challenges faced by the current infrastructure are as follows:
1. Exponential data growth is demanding more storage space, resulting in higher maintenance costs.
2. Server upgrades driven by resource demands often result in application/database downtime.
3. 24x7 power supply and manpower are required, adding to the cost.
4. Taking periodic backups of the critical servers and recovering them during failover is complex.
In order to overcome the above challenges, Tisco has decided to gradually move some of their new deployments to Microsoft Azure.
Azure Introduction
Microsoft Azure is Microsoft's public cloud offering with a wide set of cloud services that gives the flexibility to build, manage and deploy infrastructure and application on a massive, global network using diverse technology and frameworks.
Azure service offerings fall into all three categories:
· Infrastructure as a Service (IaaS): IaaS offers virtualized servers and network infrastructure components so that users can easily provision and decommission them as required.
· Platform as a Service (PaaS): PaaS offers resources/platform on which developers can build their own solutions.
· Software as a Service (SaaS): SaaS offers complete software applications that are delivered as a cloud-based service. SaaS basically enables the users to easily access applications without the need to install them.
1. Creating an Azure Account
1. Log on to the Azure site and click on the "Start Free" button.
2. Create a Microsoft Live ID (for example, Outlook, Hotmail, etc.).
3. Once you log in, fill in the required details.
4. Access the portal. Your portal is as follows.
5. To launch Azure in PowerShell, run the following commands:
Install-Module Azure
Import-Module Azure
Add-AzureAccount
6. It will ask for your email/password; once connected, it will show the Azure account details.
PS C:\windows\system32> Add-AzureAccount
Id Type Subscriptions Tenants
-- ---- ------------- -------
PS C:\windows\system32>
7. To know all subscription details, use Get-AzureSubscription.
8. To run PowerShell scripts, we must bypass the ExecutionPolicy as below.
2. Terminologies
The infrastructure of an application on Azure is made up of many components like virtual machines, storage accounts, web apps, etc. Some of the terminologies used to refer to these components are as follows:
Resource: Resources in Azure are manageable items, such as virtual machines, networks, databases, web apps, etc.
Resource group: A resource group is a logical container that holds related resources for an entire application. While creating/deploying resources on Azure, you have to specify the resource group that has to be used for storing the resource.
Resource provider: A resource provider is a service that supplies the resources which you can deploy and manage through Resource Manager. For example, Microsoft.Compute is one of the common resource providers; it supplies the virtual machine resource.
Some benefits of Resource Manager
· Deploy, manage, and monitor all the resources for a solution as a group
· Define the dependencies between resources to deploy them in the correct order.
· Organization's billing is transparent with the help of tags.
· Deploy solutions consistently throughout the development life cycle.
Learn to create a resource group:
1. Log in to the Azure portal.
2. Navigate to the "Resource group" option towards the left pane.
3. Click the "Add" option in the top middle pane to create a new resource group.
Fill in the details as follows:
· Resource group name: tisco-rg
· Subscription: Free trial
· Resource group location: East US
You will observe that the resource group tisco-rg is created as below.
Tisco Project Roadmap
Tisco has decided to gradually migrate their existing infrastructure and also the new deployments to Microsoft Azure.
You will be performing the following tasks to achieve this
Azure Storage
Requirement: Tisco has a huge amount of data stored in its file servers and in other object storages. As a part of its migration plan, Tisco wants to move the data to a storage service which can accommodate terabytes of unstructured data.
Solution: An Azure storage account can be used to store huge amounts of data as files and also as objects.
Azure Storage provides a data storage solution on the cloud. It is highly scalable, elastic, globally accessible, and automatically load-balances application data based on traffic.
Azure Storage provides the following services:
· Table: Stores NoSQL data as key-attribute pair
· Blob: Stores unstructured object data on Azure like images, videos, documents
· Queue: Provides a reliable messaging solution for asynchronous communication between loosely coupled application components
In order to ensure high availability and protect your data, the data must be replicated, either within the same data center or to another data center, depending on the replication option you choose.
Replication
You can choose one of the following replication options:
- Locally Redundant Storage (LRS): Maintains three copies of data within the same data center in a single region
- Geo Redundant Storage (GRS): Maintains six copies of data, three copies in the primary region and three copies in the secondary region
- Read-Access Geo Redundant Storage (RA-GRS): Similar to GRS, but the data stored in the secondary region is read-only.
Create a general-purpose storage account:
Step 1: Log in to the Azure portal.
Step 2: Navigate to "Resource group", open tisco-rg, click "Add", search for "Storage account", click on it, and create.
Step 3: Fill in the details as follows and Click on "Create":
· Storage Account Name: tiscostore
· Location: East US
· Replication: LRS
· Resource group: tisco-rg
Azure Virtual Network (VNet)
Azure Virtual Network (VNet) is a representation of a network in the cloud. It enables Azure virtual machines to communicate with each other, the internet, and the on-premises network. Virtual networks can be segmented into multiple subnets.
Azure Virtual Network provides the following key capabilities:
· Isolation and segmentation: Within each Azure subscription and Azure region multiple virtual networks can be configured. Each virtual network is isolated from other virtual networks.
· Communicate with the internet: By default, all resources in a virtual network can communicate outbound to the internet. For inbound communication with a resource, a public IP address(A public IP address is a resource with its own configurable settings) has to be assigned to it.
· Communicate between Azure resources: Azure resources communicate securely with each other.
· Communicate with on-premises resources: Azure VNets can be integrated with on-premises resources.
· Filter network traffic: Inbound and outbound traffic through an Azure VNet can be customized.
· Route network traffic: By default, Azure routes traffic between subnets, connected virtual networks, on-premises networks, and the internet. Traffic can be routed with an Azure route table, or routes can be user-defined.
IP addresses can be assigned to Azure resources so as to communicate with other Azure resources, with the on-premises network, and with the internet. IP addresses can be of two types in Azure: public and private.
Step 1: Log in to the Azure portal.
Step 2: Create a new resource by navigating to "Create a resource".
Step 3: Provide the parameters for creating the Azure Virtual Network as follows.
Name: tisconet
Address space: 10.1.0.0/16
Resource group: tisco-rg
Location: East US
Subnet: default
Step 4: You will observe that the Azure Virtual Network is created as below.
Azure Virtual Machines
Azure Virtual Machines gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it.
Azure Compute
Hardware/resources required to run our code.
We have the following Azure compute options:
1.Virtual Machines
· Linux or Windows
· Prebuilt images
· Varying sizes
· Premium Storage
· You manage the operating system
2.Container
· Lightweight application hosts
· Chain images together
· Docker client support on Windows
3.Cloud Services
· Web / Worker Roles
· Package application code
· Declared target operating system
· Azure Service Fabric managed
4. Web Apps
· Web application code
· IIS hosting at scale
· Source control integration for CI
· Web Jobs for background processing
Cloud computing offers dynamic provisioning of resources based on demand, on pay-as-you-use pricing. Instead of physical servers, cloud computing helps to spin up virtual servers. With the dynamic scaling and load balancing features of the cloud, long-term planning is not necessary.
Why Cloud?
Think of Amazon.com's Great Indian Festival or Flipkart's Big Billion Day; such companies also declare intermittent offer days, when demand spikes sharply.
Continuous Delivery and Cloud
1. History
Amazon Web Services (AWS) is a low-cost cloud service platform from Amazon, which provides services such as compute, storage, networking, CDN services, etc. to users. All AWS services are exposed as web services accessible from anywhere, any time, on a pay-per-use pricing model.
AWS services can be managed through a web based management console, command line interface (CLI) or software development kits (SDK). With AWS, you can provision resources in seconds and build applications without upfront capital investment.
Choosing Region
Region
· is a physical location, spread across the globe, to host your data
· has at least two availability zones for fault tolerance
· Regions are completely separate from one another
· Enterprises can choose to have their data in a specific region
Availability zones
· Availability zones are analogous to clusters of data centers
· Availability zones are connected through redundant low-latency links
· These AZs offer scalable, fault tolerant and highly-available architecture
Most AWS services are region-dependent and only a few are region-independent. Some services may not be available in all regions. So, while determining a region to push the workloads to, the following parameters should be considered:
· Availability of required services
· Cost
· Latency
· Security & Compliance
· Service Level Agreements(SLAs)
2. Services
55 services are currently available from AWS. The following are the various categories of services offered by Amazon Web Services (AWS).
AWS CLI: AWS Command Line Interface
AWS CLI: Made for operations engineers more than developers
· Great for Shell Scripting
· Feature Complete with Console & SDKs
· Interact with Any Service
AWS SDK: Software Development Kits
AWS provides SDKs for many programming languages for building applications on AWS. For example, the AWS SDK for Java is a collection of tools for developers creating Java-based web apps to run on Amazon cloud components such as Amazon Simple Storage Service (S3), Amazon Elastic Compute Cloud (EC2), and Amazon SimpleDB.
Some of the AWS SDK APIs are:
· Java
· .Net
· Node.js, etc.
An AWS access key gives access to the SDK and CLI.
Example : Pizza Application
Local System:
· Nodejs
· Application Hosting > EC2
· Images/Assets in website > S3
· User Registrations Database > RDS
· User Sessions Storage Cache > ElastiCache
· Saving Pizzas (NoSQL DB) > DynamoDB
Installing AWS Commandline Interface
1. Go to https://aws.amazon.com/cli/ and follow the installation steps on the right-side menu; in Windows it is:
2. After installation, check by running the aws --version command on cmd:
C:\Users\kaveti_S>aws --version
aws-cli/1.16.17 Python/3.6.0 Windows/10 botocore/1.12.7
3. Creating and Initializing an AWS Account
4. An AWS access key is required to configure the CLI on the local system and for the SDK.
· Top menu > Your name > Security Credentials > Continue to Security Credentials
· Expand Access Keys > Create New Access Key > Copy & download the file (rootkey.csv)
Access Key ID :AKIAJJAJRA4MN3H62CCA
Secret Access Key :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
5. Configure the CLI on the local system
· Type "aws configure" at the command prompt & provide the required details
· To test the configuration, use: aws ec2 describe-instances
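Running "aws configure" stores the answers in a plain INI file at ~/.aws/credentials. The sketch below writes that file's layout to a temp directory instead of ~/.aws, using the example keys from step 4, so you can see exactly what the CLI reads back:

```python
import configparser
import pathlib
import tempfile

# Sketch of the ~/.aws/credentials file that "aws configure" writes.
# Written to a temp dir here so we don't touch the real AWS config.
creds = configparser.ConfigParser()
creds["default"] = {
    "aws_access_key_id": "AKIAJJAJRA4MN3H62CCA",
    "aws_secret_access_key": "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",
}

path = pathlib.Path(tempfile.mkdtemp()) / "credentials"
with open(path, "w") as f:
    creds.write(f)

print(path.read_text())
```

The region and default output format go into a sibling ~/.aws/config file in the same INI style.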
3. CloudWatch & IAM
1.CloudWatch
CloudWatch is a service for setting alarms based on service metric thresholds. That means it can send a notification when a particular event occurs, using the Simple Notification Service (SNS).
Examples of CloudWatch Alarm Actions
· Send Notification via SNS
· Trigger AutoScaling Action
· Trigger EC2 Action
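To make the alarm-action idea concrete, here is a sketch of the request body that CloudWatch's PutMetricAlarm API expects for a billing alarm like the one built in Task 2 below. The SNS topic ARN and the $10 threshold are made-up examples:

```python
import json

# Parameters for a hypothetical CloudWatch billing alarm that notifies
# an SNS topic (the ARN and threshold here are illustrative examples).
billing_alarm = {
    "AlarmName": "Billing Alarm",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,              # billing metrics update every few hours
    "EvaluationPeriods": 1,
    "Threshold": 10.0,            # alarm when estimated charges exceed $10
    "ComparisonOperator": "GreaterThanThreshold",
    # the alarm action: notify this (example) SNS topic
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:admin_email"],
}

print(json.dumps(billing_alarm, indent=2))
```

The console steps in Task 2 fill in exactly these fields through the UI.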
Task 1 : Configuring Simple Notification Service
1. Set Region as US East (N. Virginia)
2. Go to Services, select Simple Notification Service (SNS) > Getting Started
3. Create New Topic > Topic Name: admin_email ; Display Name: Email SNS
4. Subscriptions > Create Subscription > Protocol: Email, Endpoint: myemail@gmail.com
5. A confirmation mail is sent to your mail id. Check & confirm the subscription.
Task 2 : Creating a CloudWatch Alarm : "Billing Alarm"
1. Go To Top Menu > Your name > My Billing Dashboard > Preferences > check the box for Receive Billing Alerts
2. Go to Services > CloudWatch > Left Menu : Alarms > Create Alarm
· Select metric > Total Estimated Charge > Check USD > Set Alarm Threshold
· Set Actions & create the Alarm
3. Refresh it; the billing alarm is created.
2.IAM - Identity & Access Management
Identity & Access Management (IAM) is a service for configuring authentication and access for user accounts.
By using IAM, we can manage
· Passwords
· Multi-Factor Authentication
· Access Keys
· SSH Keys
Multi-Factor Authentication(MFA)
Authentication that requires more than one factor to authenticate
Task 1 : Securing Your AWS Account with MFA
1. Go to Top > Your name > My Security Credentials > Select : Multi-factor Authentication > Activate
2. Select [.] Virtual device (your smartphone) & install an AWS MFA-compatible app on the smartphone, such as Google Authenticator, then open it.
3. Scan the QR code using the app, add the two consecutive codes from the app & Finish
4. Sign out from AWS and try to log in again; it will ask for the MFA code. That's it!!
IAM Policy
Used to manage permissions for different groups
Root Account Permissions
· Administer all services
· Modify billing information
· Manage users
IAM Policy Statement Properties
· Effect : “Allow”or“Deny”
· Action: Operation user can perform on services
· Resource: Specific Resources user can perform action on
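The three statement properties combine into a JSON policy document. A minimal sketch (the bucket name reuses the pizzaimgs example from the S3 section later; treat the actions and resource as illustrative):

```python
import json

# Minimal IAM policy document using the three statement properties above.
# The actions and bucket ARN are example values, not a recommended policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                           # "Allow" or "Deny"
            "Action": ["s3:GetObject", "s3:PutObject"],  # operations permitted
            "Resource": "arn:aws:s3:::pizzaimgs/*",      # resources they apply to
        }
    ],
}

print(json.dumps(policy, indent=2))
```

The AdministratorAccess policy mentioned below is the extreme case: Effect "Allow", Action "*", Resource "*".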
To check, go to My Security Credentials > Policies > Check AdministratorAccess
1. To create a user, go to Services > IAM > Add User > username:[ ] > Create User
2. To add the user to a group, Add New Group > Choose Group name > Add
3. It will create access keys for the user; go to the command line and add the user's keys to the system.
Password Policy
· Require at least one uppercase letter
· Require at least one lowercase letter
· Require at least one number
· Require at least one non-alphanumeric character
· Allow users to change their own password
To set these options, go to My Security Credentials > Account Settings > Password Policy
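AWS enforces these rules server-side, but a quick local check makes the four character-class requirements above concrete (the sample passwords are made up):

```python
import re

# Local mirror of the four password-policy rules listed above
# (illustrative only; AWS enforces the real policy at account level).
def satisfies_policy(password: str) -> bool:
    return all([
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"[0-9]", password),         # at least one number
        re.search(r"[^A-Za-z0-9]", password),  # at least one non-alphanumeric
    ])

print(satisfies_policy("Pizza@2024"))  # True
print(satisfies_policy("pizza2024"))   # False: no uppercase, no symbol
```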
Task 2 : Add a password to the created user
IAM > Users > user : smlcodes > Security credentials Tab > Console password : Manage
IAM users sign-in link:https://204050178648.signin.aws.amazon.com/console
That completes the security setup.
4. EC2 & Virtual Machines
Amazon EC2(Elastic Compute Cloud) provides users with the means of creating instances. These instances are virtual machines that are created on the AWS infrastructure.
The EC2 provides users with the opportunity to choose among several varieties of operating systems, RAMs and CPUs.
You can imagine EC2 as a computer which is maintained by someone else.
1.Virtual Private Cloud
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual network that you've defined.
A "Virtual Private Cloud" is a sub-cloud inside the AWS public cloud. Sub-cloud means it sits in an isolated logical network: other servers can't see instances that are inside a VPC. It is like a VLAN inside the AWS infrastructure.
Task : Creating a Virtual Private Cloud
We are now going to create the below architecture in AWS.
1. Go to Services > Search > VPC
2. Click on VPC > Create VPC > fill in the details & Create
3. The VPC is created successfully.
The VPC is now created, but its instances have no Internet access by default. We need to configure the route table for Internet access.
4. To configure the VPC, go to VPC > Left: Your VPCs > Select : Pizza VPC > Summary: Route table
Clicking the Route Table link opens a new tab; there, select the Route Table ID > Routes tab.
5. Next, create a subnet: VPC > Left: Subnets > Create Subnet
6. Next, create a route table: VPC > Left: Route Tables > Create Route Table
7. To access the Internet, we must create an Internet gateway: Internet Gateways > Create internet gateway
8. To add the Internet gateway to the VPC, go to Subnets > Pizza-Public-Subnet > Route table > Edit > add another route & save the details
9. Now we need to create another subnet in a different Availability Zone for replica purposes.
· Go to VPCs > Edit CIDRs > Add IPv4 CIDR > 100.64.0.0/24 > save
· Go to VPC > Subnets > Create subnet > Fill Details > Save
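The CIDR arithmetic behind these steps can be sketched with Python's ipaddress module. The 10.0.0.0/16 VPC range is a hypothetical example; 100.64.0.0/24 is the secondary CIDR added above:

```python
import ipaddress

# Hypothetical VPC range (the task doesn't state the original VPC CIDR).
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve two /24 subnets out of the VPC, e.g. one per Availability Zone.
subnets = list(vpc.subnets(new_prefix=24))[:2]
for s in subnets:
    print(s, "-", s.num_addresses, "addresses")

# A secondary CIDR must not overlap the existing VPC range.
secondary = ipaddress.ip_network("100.64.0.0/24")
print(vpc.overlaps(secondary))  # False, so it can be added to the VPC
```

Each /24 gives 256 addresses (AWS reserves a handful per subnet for its own use).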
2.Elastic Cloud Compute (EC2)
EC2 is a webservice that enables you to launch and manage Linux/Unix/Windows operating system instances in Amazon data centers.
1.EC2 Instance Parameters
· CPU
· Memory
· Storage
· Network
2.EC2 popular OS’s
· Linux (Amazon, Red Hat, Ubuntu, etc)
· Windows
3. Amazon Machine Image (AMI)
· The operating system image installed on EC2 is called an AMI
· Operating System + software installed on an EC2 instance
· Examples of AMIs are
§ Anti-virus scanners, network firewalls
§ Business Intelligence software
EC2 requires storage space for uploading & saving files. For that, AWS provides a default service called "Elastic Block Store".
4.Elastic Block Store
· Independent storage volumes used with EC2 instances
Task : Creating an EC2 Instance
Go to Services > EC2 > Create Instance >Launch Instance.
1: Choose an Amazon Machine Image (AMI)
2: Choose an Instance Type
3: Configure Instance Details
4: Add Storage
5: Configure Security Group & Launch
All Traffic
6: Important !! Download Key pairs
Task 2 : Connecting to an EC2 Instance & Deploying our Application
2. Install npm packages & start npm
3. Go to EC2 > Running Instances > Click : 1 Running Instances; note that there is no Public DNS (IPv4)
4. To connect to our EC2 instance from our local machine, we need to configure a public DNS (public IP address).
5. Elastic IP manages public IP addresses as they are created, destroyed, and assigned.
· EC2 > Left: NETWORK & SECURITY > Elastic IPs > Allocate new address > Create
· Now, select the Elastic IP > Actions > Associate Address > Click Associate
· If you check the public IP address in running instances, it is no longer empty!
Task 3 : Connect to the EC2 Instance via SSH
Task 4 : Connect to the EC2 Instance via Putty from Windows
1. Open PuTTYgen > Load > select the .pem key > Save Private Key
2. Open PuTTY, Host : enter the Public DNS (IPv4)
3. Connection > SSH > Auth > locate the private key (.ppk)
Task 5 : Installing PizzaApp
1. Installing node.js on Instance
sudo yum update
curl --location https://rpm.nodesource.com/setup_6.x | sudo bash -
sudo yum -y install nodejs
node -v
2.Using WinScp, move PizzaApp code to /PizzaApp folder & run following commands
npm install
npm start
3. Scaling the Application
Create an AMI from the EC2 Instance
EC2 > Actions > Image > Create Image > [Done]
To view Created image : Go to Images > AMI
Create a Load Balancer
Load balancing is the process of distributing network traffic across multiple servers. This ensures no single server bears too much demand. By spreading the work evenly, load balancing improves application responsiveness.
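The "spread the work evenly" idea reduces to a simple rotation. A toy round-robin sketch (instance names are made up; a real ELB also health-checks targets and removes unhealthy ones):

```python
from itertools import cycle

# Toy round-robin distribution over three hypothetical PizzaApp instances.
servers = cycle(["pizza-ec2-a", "pizza-ec2-b", "pizza-ec2-c"])

# Assign six incoming requests in turn; each server gets exactly two.
assignments = [next(servers) for _ in range(6)]
print(assignments)
```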
Creating Load Balancer for PizzaApp
1.EC2 > left : LOAD BALANCING > Load Balancer > Create Load Balancer
5. S3 (Simple Storage Service)
S3 stores data in the form of Objects in the Region you specify
S3 Object = File + Metadata; S3 Bucket = Collection of Objects
S3 Bucket Example
· Bucket Name: pizza-luvrs
· Region: Oregon
· URL: s3-us-west-2.amazonaws.com/pizza-luvrs
S3 Object Key Example
The path & filename together form the object key; each key is unique within a bucket.
· Filename: dog.png
· Folder Name: images
· Object Key: images/dog.png
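Putting the bucket and key examples together in code (posixpath keeps forward slashes on any OS; the URL follows the path-style form shown in the bucket example above):

```python
import posixpath

# Values from the bucket and object-key examples above.
bucket = "pizza-luvrs"
region_endpoint = "s3-us-west-2.amazonaws.com"

folder, filename = "images", "dog.png"
key = posixpath.join(folder, filename)  # "folders" are just key prefixes
print(key)   # images/dog.png

url = f"https://{region_endpoint}/{bucket}/{key}"
print(url)
```

Note that S3 has no real directories: "images/" is simply part of the key string.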
Task1: Create S3 Bucket for storing Pizza images
1. S3 > Create Bucket > Name : pizzaimgs
2. Add permissions to make this bucket public. For this, use https://awspolicygen.s3.amazonaws.com/policygen.html & copy the JSON
3. Go to S3 Bucket > Permissions > Bucket Policy : paste the JSON > Save
6. Databases (RDS & DynamoDB)
Relational Database Service
Managed database instances in AWS running on EC2
· Software Upgrades
· Nightly Database Backups
· Monitoring
RDS Backups
· Occurs daily
· Configurable backup window
· Backups stored 1 - 35 days
· Restore database from backup
Multi-AZ Deployment
· Database replication to a different Availability Zone
· Automatic failover in case of a catastrophic event
RDS Database Options
Task 1 : Creating a Database in RDS
1. Services > RDS > Create Database > e.g. PostgreSQL
7. CloudFormation
CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This file serves as the single source of truth for your cloud environment.
AWS Certified DevOps Engineer
1.AWS CodeCommit
AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. AWS CodeCommit eliminates the need for you to manage your own source control system or worry about scaling its infrastructure. You can use AWS CodeCommit to store anything from code to binaries.
>Get-AWSCredentials -ListProfiles
>Set-AWSCredentials -AccessKey AKIAJBJRAI3S3TZEETFA -SecretKey yGD05dNukCMt+oQw+so5
-StoreAs codecommit
>cd 'C:\Program Files (x86)\AWS Tools\CodeCommit\'
> .\git-credential-AWSS4.exe -p codecommit
IAM Sign in URL : user/123abc****@
To create Repository
Services > CodeCommit > Create Repo > Name : “FirstRepo” > Create
Clone Repo:
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/FirstRepo
2. AWS CodeDeploy
3.AWS CodePipeline
4.CloudFormation
5. Elastic Beanstalk
6.OpsWorks
AWS OpsWorks is a configuration management service that helps you build and operate highly dynamic applications, and propagate changes instantly.
7. CloudWatch
Links
Jenkins : https://lex.infosysapps.com/
Errors and Solutions
Vagrant
Error: Could not resolve host: vagrantcloud.com ?
set http_proxy=http://10.219.2.220:80
set https_proxy=http://10.219.2.220:80
apt-get update Error ?
sudo http_proxy=http://10.219.2.220:80 apt-get update