Configuring and Viewing Audit Logs in Stackdriver

In this lab, you investigate Google Cloud Audit Logging. Cloud Audit Logging maintains two audit logs for each project and organization: Admin Activity and Data Access. Google Cloud Platform services write audit log entries to these logs to help you answer the questions of "who did what, where, and when?" within your Google Cloud Platform projects.

Objectives

In this lab, you learn how to perform the following tasks:

  • Enable Data Access audit logs
  • View audit logs in the Activity page
  • View and filter audit logs in Stackdriver
  • Retrieve log entries with gcloud
  • Export audit logs and analyze them with BigQuery

Task 1. Enabling data access audit logs

Data Access audit logs (except for BigQuery) are disabled by default, so you will first enable all audit logs. Logging charges for the volume of log data that exceeds the free monthly logs allotment. All logs received by Logging count toward the allotment limit, except for the Cloud Audit Logging logs that are enabled by default: Admin Activity audit logs, System Event logs, and the Data Access audit logs from BigQuery.

  1. On the Google Cloud Platform menu, click Activate Google Cloud Shell to open Cloud Shell. If prompted, click Start Cloud Shell.
  2. Once Cloud Shell is fully open, click on the pencil icon to open the Cloud Shell code editor and Cloud Shell SSH interface in a new tab.
  3. In Cloud Shell, run the following command to retrieve the current IAM policy for your project and save it as policy.json:
gcloud projects get-iam-policy $DEVSHELL_PROJECT_ID \
--format=json >./policy.json
  4. In the Navigator pane, click on the policy.json file to open it in the editor.
  5. Add the following text to the policy.json file to enable Data Access audit logs for all services. This text should be added just after the first { and before "bindings": [ (be careful not to change anything else in the file):
   "auditConfigs": [
      {
         "service": "allServices",
         "auditLogConfigs": [
            { "logType": "ADMIN_READ" },
            { "logType": "DATA_READ"  },
            { "logType": "DATA_WRITE" }
         ]
      },
   ],

The auditConfigs block now sits at the top of the file, just before the bindings section; the rest of the file is unchanged.

  6. In Cloud Shell, run the following command to set the IAM policy:
gcloud projects set-iam-policy $DEVSHELL_PROJECT_ID \
./policy.json

The command will return and display the new IAM policy.
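To confirm that the new audit configuration is in place, you can read the policy back and print only the auditConfigs section (an optional check; the output formatting may differ slightly):

gcloud projects get-iam-policy $DEVSHELL_PROJECT_ID \
--format="json(auditConfigs)"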

Task 2. Generating some account activity

In Cloud Shell, run the following commands to create a few resources. This will generate some activity that you will view in the audit logs.

gsutil mb gs://$DEVSHELL_PROJECT_ID
echo "this is a sample file" > sample.txt
gsutil cp sample.txt gs://$DEVSHELL_PROJECT_ID
gcloud compute networks create mynetwork --subnet-mode=auto
gcloud compute instances create default-us-vm \
--zone=us-central1-a --network=mynetwork
gsutil rm -r gs://$DEVSHELL_PROJECT_ID

Task 3. Viewing Admin Activity logs

Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. For example, the logs record when VM instances and App Engine applications are created and when permissions are changed. To view the logs, you must have the Cloud Identity and Access Management roles Logging/Logs Viewer or Project/Viewer.

Admin Activity logs are always enabled so there is no need to enable them. There is no charge for your Admin Activity audit logs.

Using the Activity page

You can view abbreviated audit log entries in your project’s Activity page in the GCP Console. The actual audit log entries might contain more information than you see in the Activity page. The Activity page is good for a quick check of account activity.

  1. Switch to the browser tab showing the GCP Console and select Navigation menu > Home.
  2. Click on the ACTIVITY button near the top left.

At the top of the activity table, you will see the activity you just generated.

  3. If you do not see the activity, reload the page.
  4. If the Filter pane is not displayed on the right, click the Filter button on the top right.
  5. In the filter pane, click Activity types, click Select all, and click OK.
  6. In the filter pane, click Resource type, uncheck the Select all checkbox, select GCE Network, and click OK. The Activity table now only shows the network that was created at the start of the lab.
  7. Feel free to explore other filters. Filters help you locate specific events or verify which events occurred.

Using the Stackdriver Logging page

  1. From the GCP Console, select Navigation menu > Stackdriver > Logging.
  2. Click the down arrow in the Filter by label or text search field and select Convert to advanced filter.
  3. Delete the contents of the advanced filter field.
  4. Paste the following into the advanced filter field, replacing [PROJECT_ID] with your project ID. You can copy the project ID from the Qwiklabs Connection Details:
logName = ("projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity")
  5. Press the Submit Filter button.
  6. Configure View Options to Show newest logs first.
  7. Locate the log entry recording when the Cloud Storage bucket was deleted.

  8. Within that entry, click on the Cloud Storage text and select Show matching entries.
  9. Notice that a line was added to the advanced filter so that only storage events are shown.

You should now see only the cloud storage entries.

  10. Within the entry, click on the delete text and select Show matching entries.
  11. Notice that another line was added to the advanced filter; now only storage delete entries are shown.

This technique can be used to easily locate desired events.

  12. Expand the Cloud Storage delete entry, and then expand the protoPayload field.
  13. Expand the authenticationInfo field and notice that you can see the email address of the user who performed this action.
  14. Feel free to explore other fields in the entry.

Using the Cloud SDK

Log entries can also be read using the Cloud SDK command:

Example (Do not copy):

gcloud logging read [FILTER]
  1. Switch to the browser tab with Cloud Shell.
  2. Use the following command to retrieve just the audit activity for when storage buckets were deleted:
gcloud logging read \
"logName=projects/$DEVSHELL_PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity \
AND protoPayload.serviceName=storage.googleapis.com \
AND protoPayload.methodName=storage.buckets.delete"
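The read command accepts the usual gcloud output flags, so you can cap and reshape the results, for example (an optional variation on the command above):

gcloud logging read \
"logName=projects/$DEVSHELL_PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity" \
--limit=5 --format=json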

Task 4. Exporting Audit logs

Audit log retention

Individual audit log entries are kept for a specified length of time and are then deleted. The Stackdriver Logging Quota Policy explains how long log entries are retained. You cannot otherwise delete or modify audit logs or their entries.

Audit log type Retention period
Admin Activity 400 days
Data Access 30 days

For longer retention, you can export audit log entries like any other Stackdriver Logging log entries and keep them for as long as you wish.

Export audit logs

When exporting logs, the current filter will be applied to what is exported.

  1. From the Stackdriver Logging dashboard, set the filter to display all the audit logs by deleting all lines in the filter except the first one. Your filter will look like (your project ID will be different):
logName = ("projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity")
  2. Press the Submit Filter button.
  3. Click the CREATE EXPORT button.
  4. Provide a Sink Name of AuditLogsExport.
  5. Set the Sink service to BigQuery.
  6. Set the Sink destination to Create new BigQuery dataset, name the dataset auditlogs_dataset, and click the CREATE button.
  7. Click the Create sink button.
  8. Read the message in the Sink created dialog and click CLOSE.
  9. On the left side of the Stackdriver dashboard, click the Exports option. This is where exports can be viewed or edited. You will see the export you just created.
  10. On the right side, click the three-dot button for your export and select View Filter.

This will show the filter that was present when the export was created.

  11. Click CLOSE when done.
  12. In Cloud Shell, run the following commands to generate some more activity that you will view in the audit logs exported to BigQuery:
gsutil mb gs://$DEVSHELL_PROJECT_ID
gsutil mb gs://$DEVSHELL_PROJECT_ID-test
echo "this is another sample file" > sample2.txt
gsutil cp sample2.txt gs://$DEVSHELL_PROJECT_ID-test
gcloud compute instances delete default-us-vm \
--zone=us-central1-a --delete-disks=all
gsutil rm -r gs://$DEVSHELL_PROJECT_ID
gsutil rm -r gs://$DEVSHELL_PROJECT_ID-test
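Before moving on, note that the same export could also have been created from Cloud Shell with gcloud instead of the Console (a sketch only; it assumes the auditlogs_dataset dataset already exists, and the sink's writer service account still needs the BigQuery Data Editor role on the dataset, which you verify in the next task):

gcloud logging sinks create AuditLogsExport \
bigquery.googleapis.com/projects/$DEVSHELL_PROJECT_ID/datasets/auditlogs_dataset \
--log-filter="logName=projects/$DEVSHELL_PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity"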

Task 5. Using BigQuery to analyze logs

  1. Go to Navigation menu > BigQuery. If prompted, log in with the Qwiklabs-provided credentials.
  2. The Welcome to BigQuery in the Cloud Console message box opens. This message box provides a link to the quickstart guide and lists UI updates.
  3. Click Done.
  4. In the left pane, in the Resources section, click your project (it starts with qwiklabs-gcp-xxx). You should see an auditlogs_dataset dataset under it.
  5. Verify that the BigQuery dataset has the permissions needed for the export writer to store log entries. Click the auditlogs_dataset dataset, then click Share dataset. On the Dataset permissions page, you should see the export's service account listed as a BigQuery Data Editor member. If it is not already listed, add the service account under Add members and grant it the BigQuery Data Editor role.


  6. Click the Cancel button to close the Share dataset screen.
  7. Expand the dataset to see the table with your exported logs (click the dataset name to expand it).
  8. Click the table name and take a moment to review the schema and details of the tables that are being used.
  9. Click the Query Table button.
  10. Delete the text provided in the Query editor window and paste in the query below. This query returns the users who deleted virtual machines in the last 7 days:
#standardSQL
SELECT
  timestamp,
  resource.labels.instance_id,
  protopayload_auditlog.authenticationInfo.principalEmail,
  protopayload_auditlog.resourceName,
  protopayload_auditlog.methodName
FROM
`auditlogs_dataset.cloudaudit_googleapis_com_activity_*`
WHERE
  PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) BETWEEN
  DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND
  CURRENT_DATE()
  AND resource.type = "gce_instance"
  AND operation.first IS TRUE
  AND protopayload_auditlog.methodName = "v1.compute.instances.delete"
ORDER BY
  timestamp,
  resource.labels.instance_id
LIMIT
  1000
  11. Click the RUN button. After a couple of seconds, you will see each time someone deleted a virtual machine within the past 7 days. You should see a single entry, which is the activity you generated in this lab. Remember, BigQuery is only showing activity since the export was created.
  12. Delete the text in the Query editor window and paste in the query below. This query returns the users who deleted storage buckets in the last 7 days:
#standardSQL
SELECT
  timestamp,
  resource.labels.bucket_name,
  protopayload_auditlog.authenticationInfo.principalEmail,
  protopayload_auditlog.resourceName,
  protopayload_auditlog.methodName
FROM
`auditlogs_dataset.cloudaudit_googleapis_com_activity_*`
WHERE
  PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) BETWEEN
  DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND
  CURRENT_DATE()
  AND resource.type = "gcs_bucket"
  AND protopayload_auditlog.methodName = "storage.buckets.delete"
ORDER BY
  timestamp,
  resource.labels.bucket_name
LIMIT
  1000
  13. Click the RUN button. After a couple of seconds, you will see entries showing each time someone deleted a storage bucket within the past 7 days.
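The same kind of query can also be run from Cloud Shell with the bq tool (a sketch; the table wildcard and filter mirror the query above, and the output formatting differs from the BigQuery UI):

bq query --use_legacy_sql=false \
'SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail,
        protopayload_auditlog.methodName
 FROM `auditlogs_dataset.cloudaudit_googleapis_com_activity_*`
 WHERE protopayload_auditlog.methodName = "storage.buckets.delete"
 LIMIT 10'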

Task 6. Finish up

Review

In this lab, you had the chance to do the following:

  1. View audit logs in the Activity page
  2. View and filter audit logs in Stackdriver
  3. Retrieve log entries with gcloud
  4. Export audit logs

End your lab

Configuring an Internal Load Balancer

1 hour 30 minutes, Free

Overview

GCP offers Internal Load Balancing for your TCP/UDP-based traffic. Internal Load Balancing enables you to run and scale your services behind a private load balancing IP address that is accessible only to your internal virtual machine instances.

In this lab, you create two managed instance groups in the same region. Then you configure and test an internal load balancer with the instance groups as the backends.

Objectives

In this lab, you learn how to perform the following tasks:

  • Create HTTP and health check firewall rules
  • Configure two instance templates
  • Create two managed instance groups
  • Configure and test an internal load balancer

What you’ll need

To complete this lab, you’ll need:

  • Access to a standard internet browser (Chrome browser recommended).
  • Time. Note the lab’s Completion time in Qwiklabs. This is an estimate of the time it should take to complete all steps. Plan your schedule so you have time to complete the lab. Once you start the lab, you will not be able to pause and return later (you begin at step 1 every time you start a lab).
  • The lab’s Access time is how long your lab resources will be available. If you finish your lab with access time still available, you will be able to explore the Google Cloud Platform or work on any section of the lab that was marked “if you have time”. Once the Access time runs out, your lab will end and all resources will terminate.
  • You DO NOT need a Google Cloud Platform account or project. An account, project and associated resources are provided to you as part of this lab.
  • If you already have your own GCP account, make sure you do not use it for this lab.
  • If your lab prompts you to log into the console, use only the student account provided to you by the lab. This prevents you from incurring charges for lab activities in your personal GCP account.

Start your lab

When you are ready, click Start Lab. You can track your lab’s progress with the status bar at the top of your screen.

Find Your Lab’s GCP Username and Password

To access the resources and console for this lab, locate the Connection Details panel in Qwiklabs. Here you will find the account ID and password for the account you will use to log in to the Google Cloud Platform:

Open Google Console

If your lab provides other resource identifiers or connection-related information, it will appear on this panel as well.

Task 1. Configure HTTP and health check firewall rules

Configure firewall rules to allow HTTP traffic to the backends and TCP traffic from the GCP health checker.

Explore the my-internal-app network

The network my-internal-app with subnet-a and subnet-b and firewall rules for RDP, SSH, and ICMP traffic have been configured for you.

  • In the GCP Console, on the Navigation menu, click VPC network > VPC networks. Notice the my-internal-app network with its subnets: subnet-a and subnet-b.

    Each GCP project starts with the default network. In addition, the my-internal-app network has been created for you as part of your network diagram.

    You will create the managed instance groups in subnet-a and subnet-b. Both subnets are in the us-central1 region because an internal load balancer is a regional service. The managed instance groups will be in different zones, making your service immune to zonal failures.

Create the HTTP firewall rule

Create a firewall rule to allow HTTP traffic to the backends from the load balancer and the internet (to install Apache on the backends).

  1. On the Navigation menu, click VPC network > Firewall rules. Notice the app-allow-icmp and app-allow-ssh-rdp firewall rules.

    These firewall rules have been created for you.

  2. Click Create Firewall Rule.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name app-allow-http
    Network my-internal-app
    Targets Specified target tags
    Target tags lb-backend
    Source filter IP Ranges
    Source IP ranges 0.0.0.0/0
    Protocols and ports Specified protocols and ports
  4. For tcp, specify port 80.
  5. Click Create.
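If you are curious, an equivalent rule could be created from Cloud Shell with gcloud (a sketch only; the lab expects you to create the rule in the Console as above):

gcloud compute firewall-rules create app-allow-http \
--network=my-internal-app --allow=tcp:80 \
--source-ranges=0.0.0.0/0 --target-tags=lb-backend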

Create the health check firewall rules

Health checks determine which instances of a load balancer can receive new connections. For Internal Load Balancing, the health check probes to your load-balanced instances come from addresses in the ranges 130.211.0.0/22 and 35.191.0.0/16. Your firewall rules must allow these connections.

  1. Return to the Firewall rules page.
  2. Click Create Firewall Rule.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name app-allow-health-check
    Network my-internal-app
    Targets Specified target tags
    Target tags lb-backend
    Source filter IP Ranges
    Source IP ranges 130.211.0.0/22 35.191.0.0/16
    Protocols and ports Specified protocols and ports
  4. For tcp, specify all ports.
  5. Click Create.
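Likewise, a gcloud sketch of this rule (again optional; --allow=tcp with no port opens all TCP ports, matching the step above):

gcloud compute firewall-rules create app-allow-health-check \
--network=my-internal-app --allow=tcp \
--source-ranges=130.211.0.0/22,35.191.0.0/16 --target-tags=lb-backend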

Click Check my progress to verify the objective.

Configure HTTP and health check firewall rules

Check my progress

Task 2. Configure instance templates and create instance groups

A managed instance group uses an instance template to create a group of identical instances. Use these to create the backends of the internal load balancer.

Configure the instance templates

An instance template is an API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, boot disk image, subnet, labels, and other instance properties. Create an instance template for both subnets of the my-internal-app network.

  1. On the Navigation menu, click Compute Engine > Instance templates.
  2. Click Create instance template.
  3. For Name, type instance-template-1
  4. Click Management, security, disks, networking, sole tenancy.
  5. Click Management.
  6. Under Metadata, specify the following:
    Key Value
    startup-script-url gs://cloud-training/gcpnet/ilb/startup.sh
  7. Click Networking.
  8. For Network interfaces, specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Network my-internal-app
    Subnetwork subnet-a
    Network tags lb-backend
  9. Click Create. Wait for the instance template to be created.

    Create another instance template for subnet-b by copying instance-template-1:

  10. Select instance-template-1 and click Copy.
  11. Click Management, security, disks, networking, sole tenancy.
  12. Click Networking.
  13. For Network interfaces, select subnet-b as the Subnetwork.
  14. Click Create.
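For reference, roughly equivalent templates could be created from Cloud Shell (a sketch; it leaves the machine type and boot image at the gcloud defaults, which may differ from the Console defaults used above):

gcloud compute instance-templates create instance-template-1 \
--region=us-central1 --network=my-internal-app --subnet=subnet-a \
--tags=lb-backend \
--metadata=startup-script-url=gs://cloud-training/gcpnet/ilb/startup.sh
gcloud compute instance-templates create instance-template-2 \
--region=us-central1 --network=my-internal-app --subnet=subnet-b \
--tags=lb-backend \
--metadata=startup-script-url=gs://cloud-training/gcpnet/ilb/startup.sh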

Click Check my progress to verify the objective.

Configure instance templates

Check my progress

Create the managed instance groups

Create a managed instance group in subnet-a (us-central1-a) and subnet-b (us-central1-b).

  1. On the Navigation menu, click Compute Engine > Instance groups.
  2. Click Create Instance group.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name instance-group-1
    Location Single-zone
    Region us-central1
    Zone us-central1-a
    Group type Managed instance group
    Instance template instance-template-1
    Autoscaling policy CPU usage
    Target CPU usage 80
    Minimum number of instances 1
    Maximum number of instances 5
    Cool-down period 45
  4. Click Create.

    Repeat the same procedure for instance-group-2 in us-central1-b:

  5. Click Create Instance group.
  6. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name instance-group-2
    Location Single-zone
    Region us-central1
    Zone us-central1-b
    Group type Managed instance group
    Instance template instance-template-2
    Autoscaling policy CPU usage
    Target CPU usage 80
    Minimum number of instances 1
    Maximum number of instances 5
    Cool-down period 45
  7. Click Create.
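The same groups could be created and autoscaled from Cloud Shell (a sketch of the settings in the tables above; repeat the pair of commands for instance-group-2 with instance-template-2 in us-central1-b):

gcloud compute instance-groups managed create instance-group-1 \
--zone=us-central1-a --template=instance-template-1 --size=1
gcloud compute instance-groups managed set-autoscaling instance-group-1 \
--zone=us-central1-a --min-num-replicas=1 --max-num-replicas=5 \
--target-cpu-utilization=0.8 --cool-down-period=45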

Verify the backends

Verify that VM instances are being created in both subnets and create a utility VM to access the backends’ HTTP sites.

  1. On the Navigation menu, click Compute Engine > VM instances. Notice two instances whose names start with instance-group-1 and instance-group-2.

    These instances are in separate zones, and their internal IP addresses are part of the subnet-a and subnet-b CIDR blocks.

  2. Click Create Instance.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name utility-vm
    Region us-central1
    Zone us-central1-f
    Machine type micro (1 shared vCPU)
  4. Click Management, security, disks, networking, sole tenancy.
  5. Click Networking.
  6. For Network interfaces, click the pencil icon to edit.
  7. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Network my-internal-app
    Subnetwork subnet-a
    Primary internal IP Ephemeral (Custom)
    Custom ephemeral IP address 10.10.20.50
  8. Click Done.
  9. Click Create.
  10. Note that the internal IP addresses for the backends are 10.10.20.2 and 10.10.30.2.
  11. For utility-vm, click SSH to launch a terminal and connect.
  12. To verify the welcome page for instance-group-1-xxxx, run the following command:
curl 10.10.20.2

The output should look like this (do not copy; this is example output):

<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
 instance-group-1-1zn8<h2>Server Location</h2>Region and Zone: us-central1-a
  13. To verify the welcome page for instance-group-2-xxxx, run the following command:
curl 10.10.30.2

The output should look like this (do not copy; this is example output):

<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
 instance-group-2-q5wp<h2>Server Location</h2>Region and Zone: us-central1-b
Which of these fields identify the location of the backend?

Server Hostname

Client IP

Server Location


  14. Close the SSH terminal to utility-vm:
exit

Task 3. Configure the internal load balancer

Configure the internal load balancer to balance traffic between the two backends (instance-group-1 in us-central1-a and instance-group-2 in us-central1-b).

Start the configuration

  1. In the GCP Console, on the Navigation menu, click Network Services > Load balancing.
  2. Click Create load balancer.
  3. Under TCP Load Balancing, click Start configuration.
  4. For Internet facing or internal only, select Only between my VMs.
  5. Click Continue.
  6. For Name, type my-ilb.

Configure the regional backend service

The backend service monitors instance groups and prevents them from exceeding configured usage.

  1. Click Backend configuration.
  2. Specify the following, and leave the remaining settings as their defaults:
    Property Value (select option as specified)
    Region us-central1
    Network my-internal-app
    Instance group instance-group-1 (us-central1-a)
  3. Click Done.
  4. Click Add backend.
  5. For Instance group, select instance-group-2 (us-central1-b).
  6. Click Done.
  7. For Health Check, select Create a health check.
  8. Specify the following, and leave the remaining settings as their defaults:
    Property Value (select option as specified)
    Name my-ilb-health-check
    Protocol TCP
    Port 80
  9. Click Save and Continue.
  10. Verify that there is a blue check mark next to Backend configuration in the GCP Console. If there isn't, double-check that you have completed all the steps above.

Configure the frontend

The frontend forwards traffic to the backend.

  1. Click Frontend configuration.
  2. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Subnetwork subnet-b
    Internal IP Reserve a static internal IP address
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name my-ilb-ip
    Static IP address Let me choose
    Custom IP address 10.10.30.5
  4. Click Reserve.
  5. For Ports, type 80.
  6. Click Done.

Review and create the internal load balancer

  1. Click Review and finalize.
  2. Review the Backend and Frontend.
  3. Click Create. Wait for the load balancer to be created before moving to the next task.
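For reference, a roughly equivalent internal load balancer could be assembled from Cloud Shell with gcloud (a sketch under the names used above; my-ilb-forwarding-rule is a name chosen here, and the Console flow above is what the lab checks):

# Health check and regional backend service
gcloud compute health-checks create tcp my-ilb-health-check --port=80
gcloud compute backend-services create my-ilb \
--load-balancing-scheme=INTERNAL --protocol=TCP \
--region=us-central1 --health-checks=my-ilb-health-check
# Attach both managed instance groups as backends
gcloud compute backend-services add-backend my-ilb --region=us-central1 \
--instance-group=instance-group-1 --instance-group-zone=us-central1-a
gcloud compute backend-services add-backend my-ilb --region=us-central1 \
--instance-group=instance-group-2 --instance-group-zone=us-central1-b
# Frontend: internal forwarding rule on the reserved address
gcloud compute forwarding-rules create my-ilb-forwarding-rule \
--load-balancing-scheme=INTERNAL --network=my-internal-app --subnet=subnet-b \
--address=10.10.30.5 --ports=80 --region=us-central1 --backend-service=my-ilb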

Click Check my progress to verify the objective.

Configure the Internal Load Balancer

Check my progress

Task 4. Test the internal load balancer

Verify that the my-ilb IP address forwards traffic to instance-group-1 in us-central1-a and instance-group-2 in us-central1-b.

Access the internal load balancer

  1. On the Navigation menu, click Compute Engine > VM instances.
  2. For utility-vm, click SSH to launch a terminal and connect.
  3. To verify that the internal load balancer forwards traffic, run the following command:
curl 10.10.30.5

The output should look like this (do not copy; this is example output):

<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
 instance-group-1-1zn8<h2>Server Location</h2>Region and Zone: us-central1-a
  4. Run the same command a couple of times:
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5

You should be able to see responses from instance-group-1 in us-central1-a and instance-group-2 in us-central1-b. If not, run the command again.
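A simple loop from the utility-vm SSH session does the same thing as the repeated commands above and makes it easier to watch the backends alternate:

for i in {1..10}; do curl -s 10.10.30.5; echo; done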

KVM Bridging and Bonding

[root@MTEST ~]# cat /etc/sysconfig/network-scripts/ifcfg-
ifcfg-bond0 ifcfg-br0 ifcfg-enP1p1s0f2 ifcfg-enP2p1s0f2 ifcfg-lo
[root@MTEST ~]# ls -l /etc/sysconfig/network-scripts/ifcfg-bond0
-rw-r--r--. 1 root root 85 Nov 7 2016 /etc/sysconfig/network-scripts/ifcfg-bond0
[root@MTEST ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=br0
NM_CONTROLLED=no

[root@MTEST ~]# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
ONBOOT=yes
TYPE=Bridge
IPADDR=10.81.189.51
NETMASK=255.255.255.192
GATEWAY=10.81.189.1
NM_CONTROLLED=no

[root@MTEST ~]# cat /etc/sysconfig/network-scripts/ifcfg-enP1p1s0f2
TYPE=Ethernet
BOOTPROTO=no
NAME=enP1p1s0f2
UUID=22a0a65e-1212-48f5-ae4e-d5491a867657
DEVICE=enP1p1s0f2
ONBOOT=yes
MASTER=bond0
SLAVE=yes
HWADDR=98:be:94:68:9e:a2

 

[root@MTEST ~]# cat /etc/sysconfig/network-scripts/ifcfg-enP2p1s0f2
TYPE=Ethernet
BOOTPROTO=no
NAME=enP2p1s0f2
UUID=bcd70f87-19d8-4f3d-914c-b08250866d69
DEVICE=enP2p1s0f2
ONBOOT=yes
MASTER=bond0
SLAVE=yes
HWADDR=98:be:94:68:8f:7e
https://docs.fedoraproject.org/en-US/Fedora/24/html/Networking_Guide/sec-Network_Bridge_with_Bond.html
https://docs.solusvm.com/display/DOCS/KVM+Bridge+Setup
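After editing these files, the bond and bridge can be brought up and inspected with something like the following (a sketch; it assumes the legacy network service is in use, as NM_CONTROLLED=no suggests, and that bridge-utils is installed):

systemctl restart network
cat /proc/net/bonding/bond0
brctl show br0
ip addr show br0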

 

 

Deployment Manager: Full Production + (Stackdriver)

1 hour 30 minutes, 1 Credit

Overview

In this lab you will…

  1. Install an advanced deployment using Deployment Manager sample templates.
  2. Enable Stackdriver monitoring.
  3. Configure Stackdriver Uptime Checks and Notifications.
  4. Configure a Stackdriver Dashboard with two charts, showing CPU and Received packets.
  5. Perform a load test and simulate a service outage.

Objectives

In this lab, you will learn to:

  • Launch a cloud service from a collection of templates.
  • Configure basic black box monitoring of an application.
  • Create an uptime check to recognize a loss of service.
  • Establish an alerting policy to trigger incident response procedures.
  • Create and configure a dashboard with dynamically updated charts.
  • Test the monitoring and alerting regimen by applying a load to the service.
  • Test the monitoring and alerting regimen by simulating a service outage.

Full Production

Mechanically, the actions you perform in this lab are almost identical to the previous lab, with a few differences. But look at the deployment you are launching.

This deployment and the sample templates employ many of the best practices you have learned about throughout this class. It builds not just a single load balancer but two: one for static content and another for dynamic content. Separating the workloads in this way allows the static and dynamic content to scale independently, making for a more cost-optimized solution.

Both the dynamic and the static content are served by TWO managed instance groups, so that if one group is lost, the other can keep serving. This makes for a highly reliable and resilient design.

Finally, the system state information is pushed into a back-end Cloud SQL server. Do you know why there isn’t a separate Cloud SQL server in a different zone performing replication?

Because the number of servers you are starting will reach the 8-server quota limit established in the Qwiklabs lab accounts!

Oh — and this is a Logbook application.

Clone the Deployment Manager sample templates

Google provides a robust set of sample Deployment Manager templates that you can learn from and build upon.

Clone the repo.

Open Cloud Shell.

In Cloud Shell Command Line, create a directory to hold the Deployment Manager sample templates and change into that directory.

mkdir ~/dmsamples
cd ~/dmsamples

Clone the repository.

git clone https://github.com/GoogleCloudPlatform/deploymentmanager-samples.git

Refresh the Cloud Shell Code Editor to view the files. Click File > Refresh in the Code Editor to see the directory.

About the tutorials

There are several step-by-step tutorials in the documentation. They are located here:

https://cloud.google.com/deployment-manager/docs/tutorials

This lab is based on one of those tutorials, but includes additional content. The Deployment Manager template you will use generates an advanced HTTP(S) Load Balanced deployment for a Logbook sample application. The samples are available in both Jinja and Python. This lab uses the Python version of the templates.

https://cloud.google.com/deployment-manager/docs/create-advanced-http-load-balanced-deployment

Explore the sample files

There are many sample templates in the directory.

List the example templates.

Locate the version 2 examples and list them.

cd ~/dmsamples/deploymentmanager-samples/examples/v2
ls

You should see a listing of the sample template subdirectories.

Not all of the subdirectories are independent projects. For example, the directory named common contains templates that are used by several of the other projects. If you are studying independently later, use the README files as a guide.

The application you will build in this lab is contained in the nodejs_l7 directory. Note that there is a nodejs directory and a nodejs_l7 directory; you will use the one with _l7.

L7 means network layer 7 load balancing.

https://cloud.google.com/deployment-manager/docs/create-advanced-http-load-balanced-deployment

https://cloud.google.com/deployment-manager/images/http-load-balanced-diagram.svg

List and examine the NodeJS_l7 deployment.

Change into the nodejs_l7 python directory and list the files.

cd nodejs_l7/python
ls

You should see the application's template files, described below:

  • application.py and application.py.schema
    • Unifies the frontend and backend and defines additional resources:
    • A static service with primary and secondary managed instance groups that serves a static webpage.
    • A URL map resource that maps different URLs to their correct paths, default or static.
    • A global forwarding rule that provides a single external IP address.
    • A firewall rule that allows traffic through port 8080.
  • service.py and service.py.schema
    • Creates the application frontend.
    • Creates two managed instance groups, one primary and one secondary, using the autoscaled_group.py template.
    • Creates the backend service, including the health checker.
  • autoscaled_group.py and autoscaled_group.py.schema
    • Creates an autoscaled managed instance group using the common container_instance_template.py.

The above application-specific templates make use of several common templates that are used with other deployments.

  • /common/python/container_instance_template.py
  • /common/python/container_vm.py
  • /common/python/container_helper.py

Customize the deployment

Specify the zone and secondary zone.

The application.yaml file requires a primary zone and a secondary zone.

You can find the list of zones in Cloud Shell Command Line by entering:

gcloud compute zones list

Use the Cloud Shell Code Editor to edit application.yaml in ~/dmsamples/deploymentmanager-samples/examples/v2/nodejs_l7/python, and replace ZONE_TO_RUN and SECOND_ZONE_TO_RUN with zones of your choosing (for example, us-central1-a and us-central1-b).

application.yaml

resources:
- name: nodejs
  type: application.py
  properties:
    primaryZone: ZONE_TO_RUN
    secondaryZone: SECOND_ZONE_TO_RUN
    backendImage: gcr.io/deployment-manager-examples/mysql
    frontendImage: gcr.io/deployment-manager-examples/nodejsservice
    staticImage: gcr.io/deployment-manager-examples/nodejsservicestatic

Run the application

The application will not be operational until several steps are completed. First, you will use Deployment Manager to deploy the application. That builds the infrastructure but does not initially allow traffic to reach it. After the infrastructure is set up, you will apply service labels.

Deploy the application

  1. Name the application and pass Deployment Manager the configuration file.
gcloud deployment-manager deployments create advanced-configuration --config application.yaml
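You can follow the deployment's progress from Cloud Shell while it builds (optional; the deployment name matches the command above):

gcloud deployment-manager deployments describe advanced-configuration
gcloud deployment-manager resources list --deployment advanced-configuration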

Verify that the application is open for traffic

  • On the Products & Services menu, click VPC Networks > Firewall rules
  • The Deployment Manager template should have already created a firewall rule that allows TCP traffic on port 8080.

If you needed to create a firewall rule:

  • Create a firewall rule that opens tcp:8080 for the service.
  • On the Products & Services menu, click VPC Networks > Firewall rules
  • Click on Create Firewall Rule and specify the following:
Property Value

(type value or select option as specified)

Name allow-8080
Description
Network default
Priority 1000
Direction of traffic Ingress
Action on match allow
Targets all instances in network
Source filter IP ranges
Source IP ranges 0.0.0.0/0
Protocols and ports specified protocols and ports

tcp:8080

Click Create.

Enable health checks for the Instance Groups

Set named ports (service labels)

Service labels are metadata used by the load balancing service to group resources. Named ports are key:value metadata pairs representing the service name and the port on which it runs. Named ports can be assigned to an instance group, which indicates that the service is available on all instances in the group. This information is used by the HTTP Load Balancing service.

Set named ports for the primary instance group.

gcloud compute instance-groups managed set-named-ports advanced-configuration-frontend-pri-igm --named-ports http:8080,httpstatic:8080 --zone [PRIMARY_ZONE]

Set named ports for the secondary instance group.

gcloud compute instance-groups managed set-named-ports advanced-configuration-frontend-sec-igm --named-ports http:8080,httpstatic:8080 --zone [SECONDARY_ZONE]

https://cloud.google.com/compute/docs/instance-groups/#specifying_service_endpoints

https://cloud.google.com/sdk/gcloud/reference/compute/instance-groups/set-named-ports
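To confirm the named ports were applied, you can read them back (a quick optional check; substitute your primary zone):

gcloud compute instance-groups get-named-ports \
advanced-configuration-frontend-pri-igm --zone [PRIMARY_ZONE]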

Enable the HTTP load balancer to verify the health of the servers.

There is no firewall rule to allow health checks from the load balancer to the instances.

When a health check is used with Network load balancing, the health check probes come from addresses in the ranges 209.85.152.0/22, 209.85.204.0/22, and 35.191.0.0/16. When a health check is used with HTTP(S), SSL proxy, TCP proxy, or Internal load balancing, the health check probes come from addresses in the ranges 130.211.0.0/22 and 35.191.0.0/16.

https://cloud.google.com/compute/docs/load-balancing/health-checks

Since this is an HTTP(S) load balancer, you will need to enable TCP traffic from 130.211.0.0/22 and 35.191.0.0/16 to instances with the tag http.

  1. Create a firewall rule to allow TCP traffic to the test server.
  2. On the Products & Services menu, click VPC Networks> Firewall rules
  3. Click on Create Firewall Rule and specify the following:
Property Value

(type value or select option as specified)

Name allow-healthchecks
Description
Network default
Priority 1000
Direction of traffic Ingress
Action on match allow
Targets Specified target tags
Target tags http
Source Filter IP ranges
Source IP Ranges 130.211.0.0/22

35.191.0.0/16

Protocols and ports specified protocols and ports

tcp

Click Create.

Verify that the application is operational

The application takes a few minutes to start. You can view it in the Deployment Manager part of the Console, or you can see the instances in the Compute Engine part of the Console. The application is accessible on port 8080 at a global IP address. Unfortunately, the IP address was established dynamically when the global forwarding rule was created by the Deployment Manager templates, so you don't know the application's IP address, and you will need that address to test the application.

Find the global load balancer forwarding rule IP address

Find the Forwarding IP address.

gcloud compute forwarding-rules list 
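If you prefer, capture the address in a shell variable for the later steps (a convenience; it assumes this is the only forwarding rule in the project):

export FORWARDING_IP=$(gcloud compute forwarding-rules list --format="value(IPAddress)")
echo $FORWARDING_IP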

Open a browser and view port 8080

http://<your forwarding IP>:8080

It may take several minutes for the service to become operational. If you get an error, such as a 404, wait about two minutes and try again. When you get a blank page, you may proceed to enter log information and view it.

Create several log entries by calling this repeatedly with different messages.

http://<your forwarding IP>:8080/?msg=enter-a-message

View the log entries:

http://<your forwarding IP>:8080/

Enable monitoring for the project

Now that the application is running, you will set up Stackdriver alerts and some Stackdriver dashboards.

Configure Stackdriver services for your GCP project

  1. Click on Products & Services menu, and click onMonitoring.
  2. Stackdriver will open in a new tab or window.
  3. If requested, agree to the 30 day free trial by clickingCreate Account.
  4. If requested, agree to monitor the GCP project you are using by clicking Continue.
  5. If requested, skip the configuration of AWS accounts by clicking Skip AWS Setup.
  6. Click Continue to configure Stackdriver agents.
  7. On the Get Reports by Email page, select No reports, and then click Continue.
  8. Click Launch monitoring.
  9. Click Continue with the trial.

Configure an uptime check and alert policy for the application.


Configure an Uptime Check

  1. On the Stackdriver window or tab, click the Uptime Checks menu and select Uptime Checks Overview. Then click Add Uptime Check.
  2. Specify the following:
Property Value

(type value or select option as specified)

Title < leave blank to allow the default name >
Check Type HTTP
Resource Type URL
Hostname < your forwarding IP >
Path < leave blank >
Check Every 1 minute
  3. Click Advanced Options and specify the following, leaving the remaining settings at the default values:
Port 8080
Locations Global
  4. Click Test. If the test fails, make sure that the service is still working. Also check that the firewall rule exists and is correct. If the test succeeds, click Save.
  5. After the Uptime Check is saved, Stackdriver will offer to create an alerting policy.

Configure an Alerting Policy and Notification

  1. Click on Create Alerting Policy.
  2. Click Add Notification and add your email address. Later in the lab, when the service is not operational, you will receive an email notification.
  3. Click on Save Policy.

Configure a dashboard with a couple of useful charts.

Configure a Dashboard

  • On the Stackdriver window or tab, click Dashboards > Create Dashboard.
  • Click on the Untitled Dashboard and give it a name like ArchDP Dash.
  • Click Add Chart.
Property Value

(type value or select option as specified)

Title < leave blank to allow the default name >
Resource Type GCE VM Instance
Metric Type CPU usage
  • Click Save.
  • Click Add Chart and add another chart to the dashboard with the following properties:
Property Value

(type value or select option as specified)

Title <allow the default name>
Resource Type GCE VM Instance
Metric Type Received packets
  • Click Save.

Place a test load on the application.

Create a test application in Cloud Shell.

In Console, return to Cloud shell or open Cloud Shell if necessary.

Using the Cloud Shell Code Editor, create the test-monitor1.sh file in ~/dmsamples/deploymentmanager-samples/examples/v2/nodejs_l7/python/test-monitor1.sh

Replace <your forwarding IP> with the forwarding IP address of the application.

test-monitor1.sh

#!/bin/bash
for ((c=1; c<=250; c++))
do
   echo "$c"
   curl -s "http://<your forwarding IP>:8080/"
done

Using the Cloud Shell, make the script executable.

chmod +x test-monitor1.sh

Using the Cloud Shell, run the script several times.

./test-monitor1.sh

View the results on the Dashboard in Stackdriver.

The minimum dashboard timeline is 1 hour. You should see the Received packets graph trend upwards after a few minutes.

This bash script running on Cloud Shell is not sufficient to drive autoscaling. A load testing application will be required.

Create a test VM with Apache Bench

With the number of VMs launched, you have probably reached the Qwiklabs quota limit. Instead of installing Apache Bench on a VM, as you would normally do in a production environment, install and use it from the Cloud Shell command line.

Install Apache Bench on Cloud Shell

  1. Open Cloud Shell
cd
sudo apt-get update
sudo apt-get -y install apache2-utils
  2. Use Apache Bench to apply load to the service:
ab -n 10000 -c 100 http://<your forwarding IP>:8080/
  3. Run the above command two or three times.
  4. View the results on the Dashboard in Stackdriver.
  5. You can also view the instance groups in the Console to see whether autoscaling has been triggered. On the Products & services menu, click Compute Engine > Instance groups.

Simulate a service outage.

Remove the firewall to simulate an outage.

  1. On the Products & services menu, click VPC Networks> Firewall rules.
  2. Select the firewall rule that allows TCP 8080 traffic, and click Delete.

  3. In a few minutes, you should receive a notification email. The notification latency setting determines how long after a policy is triggered a notification is sent.

After you receive the notification you can proceed.

Full Production

You just built a full-featured, full-size, highly available, reliable, scalable, and resilient service. And it probably didn't feel significantly different from the much smaller services you built in the previous two labs. That's the magic of Deployment Manager!

Cleanup

  1. In the Cloud Platform Console, sign out of the Google account.
  2. Close the browser tab.