Configuring and Viewing Audit Logs in Stackdriver

In this lab, you investigate Google Cloud Audit Logging. Cloud Audit Logging maintains two audit logs for each project and organization: Admin Activity and Data Access. Google Cloud Platform services write audit log entries to these logs to help you answer the question of "who did what, where, and when?" within your Google Cloud Platform projects.

Objectives

In this lab, you learn how to:

  • View audit logs on the Activity page
  • View and filter audit logs in Stackdriver
  • Retrieve log entries with gcloud
  • Export audit logs

Task 1. Enabling data access audit logs

Data Access audit logs (except for BigQuery) are disabled by default, so you will first enable all audit logs. Logging charges for the volume of log data that exceeds the free monthly logs allotment. All logs received by Logging count toward the allotment limit, except for the Cloud Audit Logging logs that are enabled by default: Admin Activity audit logs, System Event logs, and the Data Access audit logs from Google BigQuery.

  1. On the Google Cloud Platform menu, click Activate Google Cloud Shell to open Cloud Shell. If prompted, click Start Cloud Shell.
  2. Once Cloud Shell is fully open, click on the pencil icon to open the Cloud Shell code editor and the Cloud Shell SSH interface in a new tab.
  3. In Cloud Shell, run the following command to retrieve the current IAM policy for your project and save it as policy.json:
gcloud projects get-iam-policy $DEVSHELL_PROJECT_ID \
--format=json >./policy.json
  4. In the Navigator pane, click on the policy.json file to open it in the editor.
  5. Add the following text to the policy.json file to enable Data Access audit logs for all services. This text should be added just after the first { and before "bindings": [ (be careful not to change anything else in the file):
   "auditConfigs": [
      {
         "service": "allServices",
         "auditLogConfigs": [
            { "logType": "ADMIN_READ" },
            { "logType": "DATA_READ"  },
            { "logType": "DATA_WRITE" }
         ]
      }
   ],

After the edit, the top of the policy.json file will contain the auditConfigs block shown above, followed by the existing "bindings" section.
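If you prefer not to edit the file by hand, the same change can be scripted. The following is an optional sketch (not a lab step) that builds the auditConfigs block programmatically; the sample policy dict is a stand-in for the JSON returned by `gcloud projects get-iam-policy`.

```python
import json

# Stand-in for the policy returned by `gcloud projects get-iam-policy`;
# in practice you would load it with: policy = json.load(open("policy.json"))
policy = {
    "bindings": [
        {"role": "roles/owner", "members": ["user:student@example.com"]}
    ],
    "etag": "BwWKmjvelug=",
    "version": 1,
}

# Enable Data Access audit logs for all services, matching the JSON
# block added by hand in the lab.
policy["auditConfigs"] = [
    {
        "service": "allServices",
        "auditLogConfigs": [
            {"logType": "ADMIN_READ"},
            {"logType": "DATA_READ"},
            {"logType": "DATA_WRITE"},
        ],
    }
]

print(json.dumps(policy, indent=2))
```

You could then write the result back with `json.dump(policy, open("policy.json", "w"), indent=2)` before running `gcloud projects set-iam-policy`.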

  6. In Cloud Shell, run the following command to set the IAM policy:
gcloud projects set-iam-policy $DEVSHELL_PROJECT_ID \
./policy.json

The command will run and display the new IAM policy.

Task 2. Generating some account activity

In Cloud Shell, run the following commands to create a few resources. This will generate some activity that you will view in the audit logs.

gsutil mb gs://$DEVSHELL_PROJECT_ID
echo "this is a sample file" > sample.txt
gsutil cp sample.txt gs://$DEVSHELL_PROJECT_ID
gcloud compute networks create mynetwork --subnet-mode=auto
gcloud compute instances create default-us-vm \
--zone=us-central1-a --network=mynetwork
gsutil rm -r gs://$DEVSHELL_PROJECT_ID

Task 3. Viewing Admin Activity logs

Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. For example, the logs record when VM instances and App Engine applications are created and when permissions are changed. To view the logs, you must have the Cloud Identity and Access Management roles Logging/Logs Viewer or Project/Viewer.

Admin Activity logs are always enabled so there is no need to enable them. There is no charge for your Admin Activity audit logs.

Using the Activity page

You can view abbreviated audit log entries in your project’s Activity page in the GCP Console. The actual audit log entries might contain more information than you see in the Activity page. The Activity page is good for a quick check of account activity.

  1. Switch to the browser tab showing the GCP Console and select Navigation menu > Home.
  2. Click on the ACTIVITY button near the top left.

At the top of the activity table, you will see the activity you just generated.

  3. If you do not see the activity, reload the page.
  4. If the Filter pane is not displayed on the right, click the Filter button on the top right.
  5. In the filter pane, click Activity types, click Select all, and click OK.
  6. In the filter pane, click Resource type, uncheck the Select all checkbox, select GCE Network, and click OK. The Activity table now only shows the network that was created at the start of the lab.
  7. Feel free to explore other filters to help locate specific events. Filters can help you locate events or verify which events occurred.

Using the Stackdriver Logging page

  1. From the GCP Console, select Navigation menu > Stackdriver > Logging.
  2. Click the down arrow in the Filter by label or text search field and select Convert to advanced filter.
  3. Delete the contents of the advanced filter field.
  4. Paste the following into the advanced filter field, replacing [PROJECT_ID] with your project ID. You can copy the project ID from the Qwiklabs Connection Details:
logName = ("projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity")
  5. Press the Submit Filter button.
  6. Configure View Options to Show newest logs first.
  7. Locate the log entry recording the deletion of the Cloud Storage bucket.

  8. Within that entry, click on the Cloud Storage text and select Show matching entries.
  9. Notice that a line was added to the advanced filter to show only storage events.

You should now see only the Cloud Storage entries.

  10. Within the entry, click on the delete text and select Show matching entries.
  11. Notice that another line was added to the advanced filter; now you see only storage delete entries.

This technique can be used to easily locate desired events.

  12. Expand the Cloud Storage delete entry, and then expand the protoPayload field.
  13. Expand the authenticationInfo field and notice that you can see the email address of the user who performed this action.
  14. Feel free to explore other fields in the entry.

Using the Cloud SDK

Log entries can also be read using the Cloud SDK command:

Example (Do not copy):

gcloud logging read [FILTER]
  1. Switch to the browser tab with Cloud Shell.
  2. Use the following command to retrieve just the audit activity recording when storage buckets were deleted:
gcloud logging read \
"logName=projects/$DEVSHELL_PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity \
AND protoPayload.serviceName=storage.googleapis.com \
AND protoPayload.methodName=storage.buckets.delete"
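Two details of the command above are worth unpacking: the %2F in the log name is simply a URL-encoded "/" (the log is named cloudaudit.googleapis.com/activity), and adding --format=json makes the output easy to post-process. The sketch below illustrates both; the sample entry is invented for illustration, though its field names match real Cloud Audit Logging entries.

```python
import json
from urllib.parse import quote

project_id = "my-project"  # stand-in for $DEVSHELL_PROJECT_ID

# The %2F in the filter is a URL-encoded "/": the underlying log is
# named cloudaudit.googleapis.com/activity.
log_name = "projects/{}/logs/{}".format(
    project_id, quote("cloudaudit.googleapis.com/activity", safe="")
)
print(log_name)

# Stand-in for one entry of `gcloud logging read ... --format=json` output;
# the values here are invented for illustration.
entries = json.loads("""
[
  {
    "protoPayload": {
      "serviceName": "storage.googleapis.com",
      "methodName": "storage.buckets.delete",
      "authenticationInfo": {"principalEmail": "student@example.com"}
    }
  }
]
""")

for entry in entries:
    payload = entry["protoPayload"]
    print(payload["authenticationInfo"]["principalEmail"], payload["methodName"])
```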

Task 4. Exporting Audit logs

Audit log retention

Individual audit log entries are kept for a specified length of time and are then deleted. The Stackdriver Logging Quota Policy explains how long log entries are retained. You cannot otherwise delete or modify audit logs or their entries.

Audit log type Retention period
Admin Activity 400 days
Data Access 30 days

For longer retention, you can export audit log entries like any other Stackdriver Logging log entries and keep them for as long as you wish.

Export audit logs

When exporting logs, the current filter will be applied to what is exported.

  1. From the Stackdriver Logging dashboard, set the filter to display all the audit logs by deleting all lines in the filter except the first one. Your filter will look like (your project ID will be different):
logName = ("projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity")
  2. Press the Submit Filter button.
  3. Click the CREATE EXPORT button.
  4. Provide a Sink Name of AuditLogsExport.
  5. Set the Sink service to BigQuery.
  6. Set the Sink destination to Create new BigQuery dataset, name the dataset auditlogs_dataset, and click the CREATE button.
  7. Click the Create sink button.
  8. Read the message in the Sink created dialog and click CLOSE.
  9. On the left side of the Stackdriver dashboard, click the Exports option. This allows exports to be viewed or edited. You will see the export you just created.
  10. On the right side, click the three-dot button for your export and select View Filter.

This will show the filter that was present when the export was created.

  11. Click CLOSE when done.
  12. In Cloud Shell, run the following commands to generate some more activity that you will view in the audit logs exported to BigQuery:
gsutil mb gs://$DEVSHELL_PROJECT_ID
gsutil mb gs://$DEVSHELL_PROJECT_ID-test
echo "this is another sample file" > sample2.txt
gsutil cp sample2.txt gs://$DEVSHELL_PROJECT_ID-test
gcloud compute instances delete --zone=us-central1-a \
--delete-disks=all default-us-vm
gsutil rm -r gs://$DEVSHELL_PROJECT_ID
gsutil rm -r gs://$DEVSHELL_PROJECT_ID-test

Task 5. Using BigQuery to analyze logs

  1. Go to Navigation menu > BigQuery. If prompted, log in with the Qwiklabs-provided credentials.
  2. The Welcome to BigQuery in the Cloud Console message box opens. This message box provides a link to the quickstart guide and lists UI updates.
  3. Click Done.
  4. In the left pane, in the Resources section, click your project (it starts with qwiklabs-gcp-xxx); you should see an auditlogs_dataset dataset under it.
  5. Verify that the BigQuery dataset has appropriate permissions to allow the export writer to store log entries. Click on the auditlogs_dataset dataset, then click Share dataset. On the Dataset permissions page you will see the service account listed as a BigQuery Data Editor member. If it's not already listed, you can add the service account under Add members and grant it the BigQuery Data Editor role.


  6. Click the Cancel button to close the Share dataset screen.
  7. Expand the dataset to see the table with your exported logs (click on the dataset name to expand it).
  8. Click on the table name and take a moment to review the schema and details of the table that is being used.
  9. Click the Query Table button.
  10. Delete the text provided in the Query editor window and paste in the query below. This query returns the users who deleted virtual machines in the last 7 days:
#standardSQL
SELECT
  timestamp,
  resource.labels.instance_id,
  protopayload_auditlog.authenticationInfo.principalEmail,
  protopayload_auditlog.resourceName,
  protopayload_auditlog.methodName
FROM
`auditlogs_dataset.cloudaudit_googleapis_com_activity_*`
WHERE
  PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) BETWEEN
  DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND
  CURRENT_DATE()
  AND resource.type = "gce_instance"
  AND operation.first IS TRUE
  AND protopayload_auditlog.methodName = "v1.compute.instances.delete"
ORDER BY
  timestamp,
  resource.labels.instance_id
LIMIT
  1000
  11. Click the RUN button. After a couple of seconds you will see each time someone deleted a virtual machine within the past 7 days. You should see a single entry, which is the activity you generated in this lab. Remember, BigQuery only shows activity that occurred after the export was created.
  12. Delete the text in the Query editor window and paste in the query below. This query returns the users who deleted storage buckets in the last 7 days:
#standardSQL
SELECT
  timestamp,
  resource.labels.bucket_name,
  protopayload_auditlog.authenticationInfo.principalEmail,
  protopayload_auditlog.resourceName,
  protopayload_auditlog.methodName
FROM
`auditlogs_dataset.cloudaudit_googleapis_com_activity_*`
WHERE
  PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) BETWEEN
  DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND
  CURRENT_DATE()
  AND resource.type = "gcs_bucket"
  AND protopayload_auditlog.methodName = "storage.buckets.delete"
ORDER BY
  timestamp,
  resource.labels.bucket_name
LIMIT
  1000
  13. Click the RUN button. After a couple of seconds you will see entries showing each time someone deleted a storage bucket within the past 7 days.
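Both queries restrict the table wildcard to the last seven days by comparing _TABLE_SUFFIX (the YYYYMMDD date suffix of each daily export table) against a computed date range. As a sanity check, the same window can be computed outside BigQuery; this optional sketch mirrors the PARSE_DATE / DATE_SUB logic from the queries above.

```python
from datetime import date, timedelta

def suffix_window(today, days=7):
    """Return the inclusive _TABLE_SUFFIX range (YYYYMMDD strings) matching
    DATE_SUB(CURRENT_DATE(), INTERVAL days DAY) .. CURRENT_DATE()."""
    start = today - timedelta(days=days)
    return start.strftime("%Y%m%d"), today.strftime("%Y%m%d")

# Example: a query run on 2019-03-26 scans tables suffixed 20190319..20190326.
lo, hi = suffix_window(date(2019, 3, 26))
print(lo, hi)  # 20190319 20190326
```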

Task 6. Finish up

Review

In this lab, you had the chance to do the following:

  1. View audit logs in the Activity page
  2. View and filter audit logs in Stackdriver
  3. Retrieve log entries with gcloud
  4. Export audit logs

End your lab


Configuring and using Stackdriver logging and monitoring

Objectives

In this lab, you will learn how to:

  • View logs using a variety of filtering mechanisms
  • Exclude log entries and disable log ingestion
  • Export logs and run reports against exported logs
  • Create and report on logging metrics
  • Create a Stackdriver account used to monitor several GCP projects
  • Create a metrics dashboard

Task 1. Set up resources in your first project

In the Qwiklabs Connection Details section, you will see three projects listed. The first and second projects will contain active GCP resources that generate logs and monitoring metric data. The third project will contain your Stackdriver account configuration data. The Connection Details section displays the actual values.

In this step, you’ll create the GCP resources for the first project.

  1. Locate GCP Project ID 1 in the Qwiklabs Connection Details section. You will need this project ID shortly.
  2. In the browser displaying the GCP Console, open the Resource Manager page (Navigation menu > IAM & admin > Manage resources).
  3. Click on the project whose ID matches GCP Project ID 1 in the Qwiklabs Connection Details section.
  4. On the Google Cloud Platform menu, click Activate Cloud Shell to open Cloud Shell. If prompted, click Start Cloud Shell.
  5. Once Cloud Shell is fully open, click on the pencil icon to open the Cloud Shell code editor and the Cloud Shell SSH interface in a new tab.
  6. In Cloud Shell, download and unpack an archive that contains setup code:
  6. In Cloud Shell, download and unpack an archive that contains setup code:
curl https://storage.googleapis.com/cloud-training/gcpsec/labs/stackdriver-lab.tgz | tar -zxf -
  7. Run the setup script to create resources:
cd stackdriver-lab
./setup.sh

The created resources will include:

  • Service accounts (for use by VMs)
  • Role assignments (granting service accounts permissions to write to Stackdriver)
  • A Linux VM with Apache and the Stackdriver agents installed
  • A Windows VM with the Stackdriver agents installed
  • A Kubernetes cluster with an Nginx deployment
  • A Pub/Sub topic and subscription
  8. Once the setup script is done, run the activity generation script to create a background load on your web servers and Pub/Sub topic:
./activity.sh

Click Check my progress to verify the objective.


Task 2. Set up resources in your second project

  1. Locate GCP Project ID 2 in the Qwiklabs Connection Details section. You will need this project ID in the next step.
  2. You will now repeat the steps from Task 1, this time to set up resources in the second project.
  3. Switch Cloud Shell to focus on project 2 (replace <project-ID-2> with GCP Project ID 2):
gcloud config set project <project-ID-2>
  4. Run the setup script to create resources in project 2:
cd ~/stackdriver-lab
./setup.sh
  5. Run the script to generate some activity in project 2:
./activity.sh
  6. Switch Cloud Shell back to project 1 (replace <project-1-ID> with GCP Project ID 1):
gcloud config set project <project-1-ID>

Click Check my progress to verify the objective.


Task 3. Log viewing and filtering in first project

See which services are writing logs

  1. Return to the browser tab displaying the Google Cloud Console.
  2. Verify it is still displaying project 1 (the project ID is displayed at the top of the console; it should match GCP Project ID 1 in the Qwiklabs Connection Details).
  3. Go to Stackdriver Logging (Navigation menu > Logging > Logs).
  4. View the range of GCP services writing logs into Stackdriver by clicking on the first drop-down in the details pane of the window.

View VM instance logs with simple filtering

  1. From the drop-down, select GCE VM Instance > All instance_id. Note that these logs are from all the VMs in the project.
  2. To get a sense of possible breach attempts on your web servers, set up your log viewer with the following settings:
  3. Enter 403 into the filter field at the top of the window.
  4. Place newest entries at the top by clicking View Options and selecting Show newest logs first.
  5. Turn on streaming logs by clicking the play (stream logs) button.

You should see new log entries showing up every 1-2 seconds as the background activity is generating unauthorized requests against your Web servers.

  6. To get a sense of overall web activity on any Linux Apache servers:
  7. Stop log streaming by clicking the Stop streaming logs button.
  8. Remove the 403 filter by clicking the X next to it.
  9. Switch to viewing just the Apache access logs by clicking All logs and selecting apache-access. Note that these entries include requests with 200, 403, and 404 responses.
  10. To get a sense of general system activity on a given Linux server:
  11. Switch from GCE VM Instance to GCE VM Instance, linux-server-***.
  12. Switch from All logs to syslog.

Note that you can also control log entry display by selecting log levels, log time windows, etc.

Task 4. Using Log Exports

Stackdriver Logging retains log entries for 30 days. In most circumstances, you’ll want to retain some log entries for an extended time (and possibly perform sophisticated reporting on the archived logs).

GCP provides a mechanism to have all log entries ingested into Stackdriver also written to one or more archival “sinks.” In this task you will configure and test log exports to BigQuery.

Configure the export to BigQuery

  1. Go to Stackdriver Logging Exports (Navigation menu > Logging > Exports).
  2. Click Create Export.
  3. Reset the viewer to the following settings:
  4. Verify there are no filters set. If any filters remain, click the X next to them to delete them.
  5. Set the first drop-down to GCE VM Instance > All instance_id.
  6. Set the remaining drop-down menus to All logs, Any log level, and Last hour.
  7. Verify the sorting is Newest logs first.


  8. In the Edit Export section of the window, configure the export:
Sink Name vm_logs
Sink service BigQuery
Sink Destination Create new BigQuery dataset (name it project_logs and click Create)
  9. Click Create Sink to save your export. Click Close to dismiss the results dialog.

You will now create an export for the Cloud HTTP Load Balancer logs to BigQuery.

  10. Click Create Export.
  11. Reset the viewer to the following settings:
  12. Verify there are no filters set. If any filters remain, click the X next to them to delete them.
  13. Set the first drop-down to Cloud HTTP Load Balancer > All forwarding_rule_name.
  14. In the Edit Export section of the window, configure the export:
Sink Name load_bal_logs
Sink service BigQuery
Sink Destination Select the existing BigQuery dataset project_logs
  15. Click Create Sink to save your export. Click Close to dismiss the results dialog.
  16. In the Navigation menu > Logging, click Exports; you should now see the exports you just defined. Note the Writer identity: this is a service account used to write your log entries into the target location. This account must have permissions to update the target.

Investigate the exported log entries

  1. Open BigQuery (Navigation menu > BigQuery). If prompted, log in with the Qwiklabs-provided credentials.
  2. The Welcome to BigQuery in the Cloud Console message box opens. This message box provides a link to the quickstart guide and lists UI updates.
  3. Click Done.
  4. In the left pane, in the Resources section, click your project (it starts with qwiklabs-gcp-xxx); you should see a project_logs dataset under it.
  5. Verify that the BigQuery dataset has appropriate permissions to allow the export writer to store log entries. Click on the project_logs dataset, then click Share dataset. On the Dataset permissions page you will see the service account listed as a BigQuery Data Editor member. If it's not already listed, you can add the service account under Add members and grant it the BigQuery Data Editor role.


  6. Expand the dataset to see the tables with your exported logs (click on the dataset name to expand it). You should see multiple tables: one for each type of log that is receiving log entries.
  7. Take a moment to review the schemas and details of the tables that are being used.
  8. Click on the requests_20190326 table (the date suffix will differ in your dataset), then click the Details button to see the number of rows and other metadata about the table.
  9. You can run all sorts of queries to analyze your archived log entries. For example, to see a breakdown of response codes issued by the global HTTP load balancer used by your GKE cluster, paste the query below into the Query editor and click Run (revising the table name so that it matches the one shown next to Table Details in your BigQuery UI):
#standardSQL
  with requests AS (
  SELECT
    COUNT(*) req_count
  FROM
    `project_logs.requests_20190326`)
SELECT
  httpRequest.status AS status,
  COUNT(httpRequest) AS requests,
  ROUND(COUNT(httpRequest)/req_count * 100,1) AS percent
FROM
  `project_logs.requests_20190326`,
  requests
GROUP BY
  httpRequest.status,
  req_count
ORDER BY
  percent DESC

Feel free to experiment with some other queries that might provide interesting insights.
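The query above computes, for each HTTP status, the request count and its share of total requests. The same aggregation is easy to mirror in plain code, which can help when checking the SQL; this optional sketch uses invented status values rather than real exported data.

```python
from collections import Counter

# Stand-in for httpRequest.status values from the exported requests_* table;
# real data would come from the BigQuery query above.
statuses = [200] * 7 + [404] * 2 + [403]

counts = Counter(statuses)
total = sum(counts.values())

# (status, request count, percent of total), sorted by percent descending,
# mirroring the SELECT/GROUP BY/ORDER BY of the SQL query.
breakdown = sorted(
    ((status, n, round(n / total * 100, 1)) for status, n in counts.items()),
    key=lambda row: row[2],
    reverse=True,
)
for status, n, percent in breakdown:
    print(status, n, percent)
```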

Click Check my progress to verify the objective.


Task 5. Creating Log Exclusions

The Stackdriver Logging service is free for the first 50GB of log entries ingested per project/month. For data volumes above 50GB per month, you pay $0.50/GB. Certain solution architectures can generate large volumes of log data, and thus potentially large bills for log ingestion.

Google makes it possible to exclude certain log entries so that they are not ingested and do not count against your free quota. If you exclude entries, but have an export sink enabled, the log entries will be exported and then discarded. If you do not have an export sink that catches the log entries, the data is discarded and lost.
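The ordering described above (entries are routed to export sinks before exclusions discard them from ingestion) can be modeled with a toy function. This is a conceptual sketch only, not the Stackdriver implementation; the sink name and filters are taken from this lab's scenario.

```python
def route_entry(entry, export_filters, exclusion_filters):
    """Toy model of log routing: an entry is written to every matching
    export sink, then dropped from ingestion if any exclusion matches."""
    exported = [name for name, match in export_filters if match(entry)]
    ingested = not any(match(entry) for match in exclusion_filters)
    return exported, ingested

# Scenario from this lab: load balancer logs are exported to BigQuery
# (the load_bal_logs sink) but excluded from Stackdriver ingestion.
exports = [("load_bal_logs", lambda e: e["resource"] == "http_load_balancer")]
exclusions = [lambda e: e["resource"] == "http_load_balancer"]

exported, ingested = route_entry({"resource": "http_load_balancer"}, exports, exclusions)
print(exported, ingested)  # ['load_bal_logs'] False
```

This matches what you verify later in the task: no new load balancer entries appear in the Logs viewer, yet new rows keep arriving in BigQuery.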

In the next task you’ll review log ingestion and configure a log exclusion.

Review the basic interface

  1. Go to the Logs ingestion page in the Console (Navigation menu > Logging > Logs ingestion).

Take a moment to review the following info:

  1. This month’s ingested log volume
  2. Projected ingestion log volume
  3. Ingestion by service

Build an exclusion for the Cloud HTTP Load Balancer

The Cloud HTTP Load Balancer records a log entry for every request the load balancer receives. Obviously, this can generate very high volumes of log data, and it is not uncommon to exclude these from ingestion.

  1. Click on the three-dot menu in the far right column of the Cloud HTTP Load Balancer row.
  2. Select Disable log source, then click OK. This will exclude all log entries from this service. Alternatively, you could define an exclusion filter to exclude only some subset of entries generated by this service.

Verify that the exclusion and export are working

  1. Return to the Logs page (Navigation menu > Logging > Logs).
  2. Select the Cloud HTTP Load Balancer service from the first drop-down menu.
  3. Configure View Options to Show newest logs first.
  4. Enable log streaming. Watch for 30 seconds; you should not see any new log entries appearing.
  5. Return to the BigQuery UI and re-run the request summary query. The request numbers should be significantly larger. If you re-run the query a few times at 5 second intervals, you will see the requests are indeed being recorded into BigQuery even though they aren’t being ingested into Stackdriver.

Task 6. Creating a Logging Metric

Stackdriver allows you to create custom metrics based on the arrival of specific log entries. In this task, you will create a metric that you can use to generate alerts if too many web requests generate access denied log entries.

  1. Navigate to the Logs-based metrics page in the console (Navigation menu > Logging > Logs-based metrics).
  2. Start defining the metric by clicking CREATE METRIC.
  3. Create a simple filter by selecting the GCE VM Instance service from the first drop-down menu, then enter 403 into the filter field. Press ENTER to apply the filter.
  4. In the Metric Editor section of the window, enter the following information:
Name 403s
Type Counter
  5. Click the Create Metric button.
  6. You will make use of this metric in the dashboarding and alerting portion of the lab.

Click Check my progress to verify the objective.


Task 7. Creating a Stackdriver account

A single Stackdriver account is used to monitor resources in one or more projects and AWS accounts, providing a single “pane-of-glass” for management.

Typically, you create a project just to hold the Stackdriver account configuration data. When you create the Stackdriver account from within that project, you can associate other GCP projects with the account, allowing for the one account to manage resources across multiple projects. You can also associate AWS accounts with the Stackdriver account, and any resources owned by those accounts will also be managed by the single Stackdriver account.

In this task, you will create and configure a Stackdriver account for use across your projects.

  1. Switch to the third project created by Qwiklabs (use the GCP Project ID 3 from the Qwiklabs Connection Details).
  2. Open Stackdriver Monitoring (Navigation menu >Monitoring).
  3. Click Create Workspace if you see your project in the Google Cloud Platform project box.
  4. When prompted, select the other two Qwiklabs-created projects and click Continue.
  5. You will not be adding any AWS accounts, so click Skip AWS Setup.
  6. The next screen offers directions on how to install the Stackdriver monitoring agents. This has already been done on your VMs. Click Continue.
  7. On the Get Reports by Email screen, select No reports and click Continue.
  8. Wait until the Launch monitoring button becomes active, then click it to enter into the Stackdriver monitoring UI.

Task 8. Creating a Stackdriver dashboard

  1. In the left pane, click Dashboards > Create Dashboard.
  2. Click Untitled Dashboard, type Example Dashboard, and press ENTER.
  3. Click ADD CHART.
  4. For Title, give your chart a name of CPU Usage.
  5. For Find resource type and metric, type GCE VM and select GCE VM Instance.
  6. For Metrics, select CPU usage.
  7. Click SAVE to add this chart to your dashboard.
  8. Click the ADD CHART button again.
  9. Name the chart Network Traffic and set the metric to GCE VM Instance > Network Traffic.
  10. Explore the other options, such as Filter, Group By, and Aggregation.
  11. Click SAVE to add this chart to your dashboard as well.

Task 9. Finish up

Review

In this lab, you had the chance to do the following:

  1. View logs using a variety of filtering mechanisms
  2. Exclude log entries and disable log ingestion
  3. Export logs and run reports against exported logs
  4. Create and report on logging metrics
  5. Create a Stackdriver account used to monitor several GCP projects
  6. Create a metrics dashboard

Also, keep in mind that Stackdriver can collect logs and metrics data from non-GCP systems. Google provides Stackdriver agents that can be installed on AWS EC2 instances, and you can install fluentd and collectd agents on on-premise machines, enabling them to write data to the Stackdriver service.

End your lab

Installing Stackdriver Agents

Task 1. Configure service accounts and role assignments

It’s a best practice to create service accounts for your VMs, and to assign those accounts the minimal set of roles required for the VMs to perform their jobs. In this task, you’ll create two service accounts (one for Linux VMs and one for Windows VMs) and assign them only the roles required to write log entries and metrics data into Stackdriver.

  1. Select Service accounts in the Google Cloud Console (Navigation menu > IAM & admin > Service accounts).
  2. Click CREATE SERVICE ACCOUNT. This account will be used by your Linux VM.
  3. Name the service account linux-servers.
  4. Assign the Logging > Logs Writer role to the service account.
  5. Assign the Monitoring > Monitoring Metric Writer role to the service account.
  6. Save your new service account.
  7. Repeat the process, creating another service account named windows-servers, assigning the same roles. This account will be used for the Windows VM you create.

Click Check my progress to verify the objective.


Task 2. Create the VMs

In this task, you’ll create one Linux VM and one Windows VM. You will configure each VM to use an appropriate service account that will allow the installed agents to write their data to Stackdriver. The Linux VM will also have Apache2 installed to show how the logging agent acquires 3rd-party application logs.

Create Linux VM

  1. Select VM instances in the Google Cloud Console (Navigation menu > Compute Engine > VM instances).
  2. Click Create to create your Linux VM.
  3. Configure your VM with the following settings (leave other settings at defaults):
Property Value
Name linux-server
Region us-central1
Zone us-central1-a
Identity and API access > Service account linux-servers
Allow HTTP traffic Enabled
  4. Click Management, security, disks, networking, sole tenancy.
  5. Paste the following code into the Startup script field. This script installs the Apache web server on your VM:
#!/bin/bash
if [ ! -f /initialized.txt ]; then
    apt-get update
    apt-get install -y apache2
    touch /initialized.txt
fi
  6. Click Create.
  7. After the green check mark appears next to your instance, wait another 30-60 seconds, then click on the external IP address assigned to your linux-server VM. This should open a new tab showing the Apache2 default page.

Create Windows VM

  1. In the VM instances page, click CREATE INSTANCE to create your Windows VM.
  2. Configure your VM with the following settings (leave other settings at defaults):
Property Value
Name windows-server
Region us-central1
Zone us-central1-a
Boot disk Windows Server version 1803 Datacenter Core
Identity and API access > Service account windows-servers
Allow HTTP traffic Enabled
  3. Click Create.
  4. You may continue on to Task 3 while the Windows VM finishes starting.

Click Check my progress to verify the objective.


Task 3. Install, configure, and test logging agents

In this task, you will install the Stackdriver logging agent on each VM. You will check the VM logs before and after agent installation to see how installing the agent results in more logs being recorded.

Investigate log entries before agent installation

  1. Go to Logging in the Google Cloud console (Navigation menu > Stackdriver > Logging).
  2. Using the first drop-down, select GCE VM Instance as the log resource category.
  3. Using the second drop-down, view the log types available. Leave the selection as All logs.
  4. Note the current log entries. There are only audit log entries, detailing actions such as creating the VMs. There are no syslog entries, Apache log entries, etc.

Install the Stackdriver Logging agent on Linux server

  1. Return to the VM instances page in the console, and SSH into the linux-server instance by clicking the SSH button in the Connect column.
  2. Check to see what version, if any, of the logging agent is installed by entering
dpkg -l google-fluentd
    You should see that the package isn't installed.
  3. Install the agent by entering
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh
  4. Check again which version of the logging agent is installed by entering
dpkg -l google-fluentd
    You should see that the package is now installed.
  5. Close the SSH window.
  6. In the VM instances window, select linux-server and reset the instance.

Configure remote Windows management

  1. In your Qwiklabs user Chrome window, install the Chrome RDP for Google Cloud Platform application (https://goo.gl/ZabJn6).
  2. In the VM instances screen, click the drop-down next to RDP in the Connect column of the VM listing, and select Set Windows password.
  3. In the Set new Windows password dialog, click Set. Note the password, as you'll need it later (you can copy/paste it into a text editor or note-taking app, or write it down).

Install the Stackdriver Logging agent on the Windows server

  1. Click on the RDP button for your windows-server instance.
  2. When prompted, provide the Windows password and click OK.
  3. If prompted that the server certificate cannot be verified, click Continue to connect.
  4. Open Powershell by entering the following at the Windows command prompt:
powershell
  5. In Powershell, enter the following to download and install the agent:
invoke-webrequest https://dl.google.com/cloudagents/windows/StackdriverLogging-v1-8.exe -OutFile StackdriverLogging-v1-8.exe;
.\StackdriverLogging-v1-8.exe
  6. Follow the Stackdriver installer prompts, accepting all the default values.
  7. Click the down arrow at the top of the RDP window, and select Ctrl+Alt+Del.

9ba1ef228e3d21b4.png

  8. Choose Sign out and close the Chrome RDP window.
  9. In the VM instances window, reset the windows-server instance.

Verify that the logging agents are generating log entries

  1. In the console window, navigate to the Logging screen (Navigation menu > Stackdriver > Logging).
  2. From the first dropdown, select GCE VM Instance > linux-server.
  3. From the second dropdown, note the log types now available.
  4. Review the apache-access entries, apache-error entries, and syslog entries.
  5. From the first dropdown, select GCE VM Instance > windows-server.
  6. From the second dropdown, note the log types now available.
  7. Review the winevt.raw entries. These entries are pulled from the Windows event logs on your windows-server instance.
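The same entries can also be queried from Cloud Shell with the gcloud CLI. This is an optional sketch, assuming your default project is already set:

```shell
# Read the five most recent log entries written by GCE VM instances.
gcloud logging read 'resource.type="gce_instance"' \
    --limit 5 \
    --format 'table(timestamp, logName)'
```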

Task 4. Install, configure, and test monitoring agents

In this task, you’ll install the monitoring agent on both VMs in much the same way you installed the logging agents.

Install the Linux monitoring agent

  1. SSH into the linux-server instance.
  2. Check to see if the agent is installed:
dpkg -l stackdriver-agent
    You should see a message indicating the package isn't installed.
  3. Run the following commands to install the agent:
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh
  4. Check again to verify that the agent is installed:
dpkg -l stackdriver-agent
  5. Download and activate Stackdriver's Apache monitoring plugin:
cd /opt/stackdriver/collectd/etc/collectd.d/
sudo curl -O https://raw.githubusercontent.com/Stackdriver/stackdriver-agent-service-configs/master/etc/collectd.d/apache.conf
sudo service stackdriver-agent restart
  6. Close the SSH window.
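If you want to sanity-check the plugin before closing the session, you can verify the agent restarted cleanly and the plugin config landed in the collectd directory. A sketch, assuming the default install paths used above:

```shell
# The monitoring agent should report as running after the restart.
sudo service stackdriver-agent status

# apache.conf should appear alongside the other collectd plugin configs.
ls /opt/stackdriver/collectd/etc/collectd.d/
```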

Install the Windows monitoring agent

  1. From the VM instances window, click on the RDP button for your windows-server instance.
  2. When prompted, provide the Windows password you noted earlier.
  3. Start Powershell:
powershell
  4. In Powershell, enter the following to download and install the agent:
invoke-webrequest https://repo.stackdriver.com/windows/StackdriverMonitoring-GCM-46.exe -OutFile StackdriverMonitoring-GCM-46.exe;
.\StackdriverMonitoring-GCM-46.exe
  5. Follow the Stackdriver installer prompts, accepting all the default values.
  6. Click the down arrow at the top of the RDP window, and select Ctrl+Alt+Del.
  7. Choose Sign out and close the Chrome RDP window.

Verify that the monitoring agents are generating metrics

The Apache plugin monitors the following metrics:

Active Connections (count): The number of active connections currently attached to Apache.

Idle Workers (count): The number of idle workers currently attached to Apache.

Requests (count/s): The number of requests per second serviced by Apache.

  1. In the GCP console window, navigate to the Monitoring screen (Navigation menu > Stackdriver > Monitoring).
  2. If asked, select your Qwiklabs account. In the Create your free Workspace screen, click the Create workspace button. This creates a new Stackdriver monitoring workspace. A single Stackdriver workspace can contain multiple GCP projects and provides a "single pane of glass" to monitor all their resources.
  3. On the Add Google Cloud Platform projects to monitor screen, click the Continue button.
  4. Click the Skip AWS Setup button. We will not set up AWS monitoring, but Stackdriver monitoring agents are available for AWS as well.
  5. On the Install the Stackdriver Agents screen, click Continue. You have already done this.
  6. On the Get Reports by Email screen, for this lab click No reports and then Continue. In your actual account, you may want to receive email reports.
  7. Wait a minute while Stackdriver monitoring configures and then click Launch Monitoring.
  8. Under Monitoring Resources, click on Resources. You should see Apache HTTP server under HOST:

    8337829aefb47737.png

  9. Click on Apache HTTP server and then click on the linux-server instance in the resulting screen.
  10. Explore the metrics and information displayed for the Apache server.

Click Check my progress to verify the objective.

Install, configure, and test logging and monitoring agents


Task 5. Finish up


Configuring an Internal Load Balancer

1 hour 30 minutes (Free)

Overview

GCP offers Internal Load Balancing for your TCP/UDP-based traffic. Internal Load Balancing enables you to run and scale your services behind a private load balancing IP address that is accessible only to your internal virtual machine instances.

In this lab, you create two managed instance groups in the same region. Then you configure and test an internal load balancer with the instances groups as the backends, as shown in this network diagram:

network_diagram.png

Objectives

In this lab, you learn how to perform the following tasks:

  • Create HTTP and health check firewall rules
  • Configure two instance templates
  • Create two managed instance groups
  • Configure and test an internal load balancer

What you’ll need

To complete this lab, you’ll need:

  • Access to a standard internet browser (Chrome browser recommended).
  • Time. Note the lab’s Completion time in Qwiklabs. This is an estimate of the time it should take to complete all steps. Plan your schedule so you have time to complete the lab. Once you start the lab, you will not be able to pause and return later (you begin at step 1 every time you start a lab).
  • The lab’s Access time is how long your lab resources will be available. If you finish your lab with access time still available, you will be able to explore the Google Cloud Platform or work on any section of the lab that was marked “if you have time”. Once the Access time runs out, your lab will end and all resources will terminate.
  • You DO NOT need a Google Cloud Platform account or project. An account, project and associated resources are provided to you as part of this lab.
  • If you already have your own GCP account, make sure you do not use it for this lab.
  • If your lab prompts you to log into the console, use only the student account provided to you by the lab. This prevents you from incurring charges for lab activities in your personal GCP account.

Start your lab

When you are ready, click Start Lab. You can track your lab’s progress with the status bar at the top of your screen.

Find Your Lab’s GCP Username and Password

To access the resources and console for this lab, locate the Connection Details panel in Qwiklabs. Here you will find the account ID and password for the account you will use to log in to the Google Cloud Platform:


If your lab provides other resource identifiers or connection-related information, it will appear on this panel as well.

Task 1. Configure HTTP and health check firewall rules

Configure firewall rules to allow HTTP traffic to the backends and TCP traffic from the GCP health checker.

Explore the my-internal-app network

The network my-internal-app, with subnet-a and subnet-b and firewall rules for RDP, SSH, and ICMP traffic, has been configured for you.

  • In the GCP Console, on the Navigation menu (Navigation menu), click VPC network > VPC networks. Notice the my-internal-app network with its subnets: subnet-a and subnet-b.

    Each GCP project starts with the default network. In addition, the my-internal-app network has been created for you as part of your network diagram.

    You will create the managed instance groups in subnet-a and subnet-b. Both subnets are in the us-central1 region because an internal load balancer is a regional service. The managed instance groups will be in different zones, making your service immune to zonal failures.

Create the HTTP firewall rule

Create a firewall rule to allow HTTP traffic to the backends from the load balancer and the internet (to install Apache on the backends).

  1. On the Navigation menu (Navigation menu), click VPC network > Firewall rules. Notice the app-allow-icmp and app-allow-ssh-rdp firewall rules.

    These firewall rules have been created for you.

  2. Click Create Firewall Rule.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name app-allow-http
    Network my-internal-app
    Targets Specified target tags
    Target tags lb-backend
    Source filter IP Ranges
    Source IP ranges 0.0.0.0/0
    Protocols and ports Specified protocols and ports
  4. For tcp, specify port 80.
  5. Click Create.
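The console steps above can also be scripted. This is a sketch of the equivalent gcloud command, using the values from the table; it is not part of the lab's graded steps:

```shell
# Create the HTTP firewall rule for the lb-backend instances.
gcloud compute firewall-rules create app-allow-http \
    --network my-internal-app \
    --target-tags lb-backend \
    --source-ranges 0.0.0.0/0 \
    --allow tcp:80
```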

Create the health check firewall rules

Health checks determine which instances of a load balancer can receive new connections. For Internal Load Balancing, the health check probes to your load-balanced instances come from addresses in the ranges 130.211.0.0/22 and 35.191.0.0/16. Your firewall rules must allow these connections.

  1. Return to the Firewall rules page.
  2. Click Create Firewall Rule.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name app-allow-health-check
    Network my-internal-app
    Targets Specified target tags
    Target tags lb-backend
    Source filter IP Ranges
    Source IP ranges 130.211.0.0/22 35.191.0.0/16
    Protocols and ports Specified protocols and ports
  4. For tcp, specify all ports.
  5. Click Create.
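As with the HTTP rule, a gcloud equivalent is possible. A sketch using the health-check source ranges listed above:

```shell
# Allow TCP probes from the GCP health-check address ranges.
gcloud compute firewall-rules create app-allow-health-check \
    --network my-internal-app \
    --target-tags lb-backend \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --allow tcp
```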

Click Check my progress to verify the objective.

Configure HTTP and health check firewall rules

Check my progress

Task 2. Configure instance templates and create instance groups

A managed instance group uses an instance template to create a group of identical instances. Use these to create the backends of the internal load balancer.

Configure the instance templates

An instance template is an API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, boot disk image, subnet, labels, and other instance properties. Create an instance template for both subnets of the my-internal-app network.

  1. On the Navigation menu (Navigation menu), click Compute Engine > Instance templates.
  2. Click Create instance template.
  3. For Name, type instance-template-1
  4. Click Management, security, disks, networking, sole tenancy.
  5. Click Management.
  6. Under Metadata, specify the following:
    Key Value
    startup-script-url gs://cloud-training/gcpnet/ilb/startup.sh
  7. Click Networking.
  8. For Network interfaces, specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Network my-internal-app
    Subnetwork subnet-a
    Network tags lb-backend
  9. Click Create. Wait for the instance template to be created.

    Create another instance template for subnet-b by copying instance-template-1:

  10. Select instance-template-1 and click Copy.
  11. Click Management, security, disks, networking, sole tenancy.
  12. Click Networking.
  13. For Network interfaces, select subnet-b as the Subnetwork.
  14. Click Create.
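For reference, both templates can be created from Cloud Shell. A sketch using the lab's values, with the machine type left at the gcloud default:

```shell
# Template for subnet-a backends.
gcloud compute instance-templates create instance-template-1 \
    --network my-internal-app --subnet subnet-a --region us-central1 \
    --tags lb-backend \
    --metadata startup-script-url=gs://cloud-training/gcpnet/ilb/startup.sh

# Template for subnet-b backends (identical except for the subnet).
gcloud compute instance-templates create instance-template-2 \
    --network my-internal-app --subnet subnet-b --region us-central1 \
    --tags lb-backend \
    --metadata startup-script-url=gs://cloud-training/gcpnet/ilb/startup.sh
```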

Click Check my progress to verify the objective.

Configure instance templates

Check my progress

Create the managed instance groups

Create a managed instance group in subnet-a (us-central1-a) and subnet-b (us-central1-b).

  1. On the Navigation menu (Navigation menu), click Compute Engine >Instance groups.
  2. Click Create Instance group.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name instance-group-1
    Location Single-zone
    Region us-central1
    Zone us-central1-a
    Group type Managed instance group
    Instance template instance-template-1
    Autoscaling policy CPU usage
    Target CPU usage 80
    Minimum number of instances 1
    Maximum number of instances 5
    Cool-down period 45
  4. Click Create.

    Repeat the same procedure for instance-group-2 inus-central1-b:

  5. Click Create Instance group.
  6. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name instance-group-2
    Location Single-zone
    Region us-central1
    Zone us-central1-b
    Group type Managed instance group
    Instance template instance-template-2
    Autoscaling policy CPU usage
    Target CPU usage 80
    Minimum number of instances 1
    Maximum number of instances 5
    Cool-down period 45
  7. Click Create.
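The same managed instance groups can be built with gcloud. A sketch for instance-group-1; repeat with instance-template-2 and us-central1-b for instance-group-2:

```shell
# Create the managed instance group from its template.
gcloud compute instance-groups managed create instance-group-1 \
    --zone us-central1-a \
    --template instance-template-1 \
    --size 1

# Attach the autoscaling policy from the table above (80% CPU target).
gcloud compute instance-groups managed set-autoscaling instance-group-1 \
    --zone us-central1-a \
    --min-num-replicas 1 --max-num-replicas 5 \
    --target-cpu-utilization 0.80 --cool-down-period 45
```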

Verify the backends

Verify that VM instances are being created in both subnets and create a utility VM to access the backends’ HTTP sites.

  1. On the Navigation menu, click Compute Engine > VM instances. Notice the two instances whose names start with instance-group-1 and instance-group-2.

    These instances are in separate zones, and their internal IP addresses are part of the subnet-a and subnet-b CIDR blocks.

  2. Click Create Instance.
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name utility-vm
    Region us-central1
    Zone us-central1-f
    Machine type micro (1 shared vCPU)
  4. Click Management, security, disks, networking, sole tenancy.
  5. Click Networking.
  6. For Network interfaces, click the pencil icon to edit.
  7. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Network my-internal-app
    Subnetwork subnet-a
    Primary internal IP Ephemeral (Custom)
    Custom ephemeral IP address 10.10.20.50
  8. Click Done.
  9. Click Create.
  10. Note that the internal IP addresses for the backends are 10.10.20.2 and 10.10.30.2.
  11. For utility-vm, click SSH to launch a terminal and connect.
  12. To verify the welcome page for instance-group-1-xxxx, run the following command:
curl 10.10.20.2

The output should look like this (do not copy; this is example output):

<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
 instance-group-1-1zn8<h2>Server Location</h2>Region and Zone: us-central1-a
  13. To verify the welcome page for instance-group-2-xxxx, run the following command:
curl 10.10.30.2

The output should look like this (do not copy; this is example output):

<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
 instance-group-2-q5wp<h2>Server Location</h2>Region and Zone: us-central1-b
Which of these fields identify the location of the backend?

Server Hostname

Client IP

Server Location


  14. Close the SSH terminal to utility-vm:
exit
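For reference, the utility VM above can also be created from Cloud Shell, pinned to the custom internal address used in this lab. A sketch, with f1-micro standing in for the console's "micro (1 shared vCPU)" choice:

```shell
# Create the utility VM on subnet-a with the fixed internal IP 10.10.20.50.
gcloud compute instances create utility-vm \
    --zone us-central1-f \
    --machine-type f1-micro \
    --network-interface \
      network=my-internal-app,subnet=subnet-a,private-network-ip=10.10.20.50
```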

Task 3. Configure the internal load balancer

Configure the internal load balancer to balance traffic between the two backends (instance-group-1 in us-central1-a and instance-group-2 in us-central1-b), as illustrated in the network diagram:

network_diagram.png

Start the configuration

  1. In the GCP Console, on the Navigation menu (Navigation menu), click Network Services > Load balancing.
  2. Click Create load balancer.
  3. Under TCP Load Balancing, click Start configuration.
  4. For Internet facing or internal only, select Only between my VMs.
  5. Click Continue.
  6. For Name, type my-ilb.

Configure the regional backend service

The backend service monitors instance groups and prevents them from exceeding configured usage.

  1. Click Backend configuration.
  2. Specify the following, and leave the remaining settings as their defaults:
    Property Value (select option as specified)
    Region us-central1
    Network my-internal-app
    Instance group instance-group-1 (us-central1-a)
  3. Click Done.
  4. Click Add backend.
  5. For Instance group, select instance-group-2 (us-central1-b).
  6. Click Done.
  7. For Health Check, select Create a health check.
  8. Specify the following, and leave the remaining settings as their defaults:
    Property Value (select option as specified)
    Name my-ilb-health-check
    Protocol TCP
    Port 80
  9. Click Save and Continue.
  10. Verify that there is a blue check mark next to Backend configuration in the GCP Console. If there isn't, double-check that you have completed all the steps above.

Configure the frontend

The frontend forwards traffic to the backend.

  1. Click Frontend configuration.
  2. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Subnetwork subnet-b
    Internal IP Reserve a static internal IP address
  3. Specify the following, and leave the remaining settings as their defaults:
    Property Value (type value or select option as specified)
    Name my-ilb-ip
    Static IP address Let me choose
    Custom IP address 10.10.30.5
  4. Click Reserve.
  5. For Ports, type 80.
  6. Click Done.

Review and create the internal load balancer

  1. Click Review and finalize.
  2. Review the Backend and Frontend.
  3. Click Create. Wait for the load balancer to be created before moving to the next task.
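The whole load balancer can also be assembled with gcloud. This sketch mirrors the console configuration above (health check, backend service with two backends, and the frontend forwarding rule on 10.10.30.5):

```shell
# TCP health check on port 80.
gcloud compute health-checks create tcp my-ilb-health-check --port 80

# Regional internal backend service.
gcloud compute backend-services create my-ilb \
    --load-balancing-scheme internal \
    --protocol tcp \
    --health-checks my-ilb-health-check \
    --region us-central1

# Attach both managed instance groups as backends.
gcloud compute backend-services add-backend my-ilb \
    --instance-group instance-group-1 \
    --instance-group-zone us-central1-a \
    --region us-central1
gcloud compute backend-services add-backend my-ilb \
    --instance-group instance-group-2 \
    --instance-group-zone us-central1-b \
    --region us-central1

# Frontend: forward port 80 on the static internal address.
gcloud compute forwarding-rules create my-ilb-forwarding-rule \
    --load-balancing-scheme internal \
    --network my-internal-app --subnet subnet-b \
    --address 10.10.30.5 --ports 80 \
    --region us-central1 \
    --backend-service my-ilb
```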

Click Check my progress to verify the objective.

Configure the Internal Load Balancer


Task 4. Test the internal load balancer

Verify that the my-ilb IP address forwards traffic to instance-group-1 in us-central1-a and instance-group-2 in us-central1-b.

Access the internal load balancer

  1. On the Navigation menu, click Compute Engine > VM instances.
  2. For utility-vm, click SSH to launch a terminal and connect.
  3. To verify that the internal load balancer forwards traffic, run the following command:
curl 10.10.30.5

The output should look like this (do not copy; this is example output):

<h1>Internal Load Balancing Lab</h1><h2>Client IP</h2>Your IP address : 10.10.20.50<h2>Hostname</h2>Server Hostname:
 instance-group-1-1zn8<h2>Server Location</h2>Region and Zone: us-central1-a
  4. Run the same command several more times:
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5
curl 10.10.30.5

You should be able to see responses from instance-group-1 in us-central1-a and instance-group-2 in us-central1-b. If not, run the command again.
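Instead of typing the command ten times, you can loop it and tally which backend answered. A sketch that greps the instance-group name out of each response:

```shell
# Hit the internal load balancer ten times and count responses per backend.
for i in $(seq 1 10); do
  curl -s 10.10.30.5 | grep -o 'instance-group-[12]'
done | sort | uniq -c
```

With both backends healthy, you should see a mix of instance-group-1 and instance-group-2 in the tally.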