
Google partners with Improbable to support next generation of video games

Tuesday, December 13, 2016

Google Cloud’s guiding philosophy is to enable what’s next, and gaming is one industry that’s constantly pushing what’s possible with technical innovation. At Google, we’re no strangers to these advancements, from AlphaGo’s machine learning breakthrough to Pokémon GO’s achievements in scaling and mapping on GCP.

We are always seeking new partners who share our enthusiasm for innovation, and today we are announcing a partnership with Improbable, a company focused on building large-scale, complex online worlds through its distributed operating system, SpatialOS. As part of the partnership, Improbable is launching the SpatialOS Games Innovation Program, which provides game developers with credits to access Improbable’s technology powered by GCP, and the freedom to get creative and experiment with what’s possible right up until their game launches. Today, game developers can join the SpatialOS open alpha and start to prototype, test and deploy games to the cloud. The program will fully launch in Q1 2017, along with the SpatialOS beta.

SpatialOS allows game developers to create simulations of great scale (a single, highly detailed world can span hundreds of square miles), great complexity (millions of entities governed by realistic physics) and huge populations (thousands of players sharing the same world). These exciting new games are possible with SpatialOS plus the scalability, reliability and openness of GCP, including the use of Google Cloud Datastore’s fully managed NoSQL database and Google Compute Engine’s internal network, instance uptime, live migration and provisioning speed.



Bossa Studios is already using SpatialOS and GCP to build Worlds Adrift, a 3D massively multiplayer game set to launch in early 2017. In Worlds Adrift, thousands of players share a single world of floating islands that currently cover more than 1,000 km². Players form alliances, build sky-ships and become scavengers, explorers, heroes or pirates in an open, interactive world. They can steal ships and scavenge wrecks while the islands’ flora and fauna can flourish and decline over time.
A collision of two fully customized ships flying through the procedurally generated and persistent universe of Worlds Adrift. Read about the game’s origin story and technical details of its physics.

We see many opportunities for GCP to support developers building next-generation games and look forward to what game studios large and small will create out of our partnership with Improbable. To join the SpatialOS open alpha or learn more about the developer program visit SpatialOS.com.



Announcing new Google Cloud Client Libraries for four key services

Monday, December 12, 2016


Google Cloud Platform offers a range of services and APIs supported by an impressive backend infrastructure. But to benefit from the power and capabilities of our APIs, you as a developer also need a great client-side experience: client libraries you’ll actually want to use, that are well documented, and that are easy to access.

That’s why we are announcing today the beta release of the new Google Cloud Client Libraries for four of our cloud services: BigQuery, Google Cloud Datastore, Stackdriver Logging, and Google Cloud Storage. These libraries are idiomatic, well-documented, open-source, and cover seven server-side languages: C#, Go, Java, Node.js, PHP, Python, and Ruby. Most importantly, this new family of libraries is for GCP specifically and provides a consistent experience as you use each of these four services.

Finding client libraries fast

We want to make it easy for you to discover client libraries on cloud.google.com, so we updated our product documentation pages with a prominent client library section for each of these four products. Here’s what you can see in the left-hand navigation bar of the BigQuery documentation APIs & Reference section:



Click on the Client Libraries link to see the new Client Libraries page and select the language of your choice to learn how to install the library:
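For Node.js, for example, installation is a single npm command (other languages use their own package managers, such as pip, Maven, NuGet or gem):

$ npm install --save @google-cloud/bigquery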


Right underneath the installation section, there’s a sample that shows how to make an API call. Set up auth using a single command, copy-paste the sample code and replace your variables, and you’ll be up and running in no time.
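For local development, that single auth command is gcloud’s application-default login, which stores credentials where the client libraries automatically find them:

$ gcloud auth application-default login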



Lower in the page, you can find the links to access the library’s GitHub repo, ask a question on StackOverflow, or navigate to the client library reference for your specific language:


Client libraries you’ll want to use

The new Google Cloud Client Libraries were built with usability in mind from day one. We strive to make the libraries idiomatic and include the usage patterns you expect from your programming language -- so you feel right at home when you code against them.

They also include plenty of samples. Each client library reference now includes a code example for every language and every API method, showing you how to work with the API and follow best practices. For instance, the Node.js client library reference for BigQuery displays the following code for the createDataset method:
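In essence, the call looks like this. This is a minimal sketch rather than the reference page’s exact sample; the project ID and dataset name are placeholders:

var BigQuery = require('@google-cloud/bigquery');

var bigquery = BigQuery({ projectId: 'your-project-id' });

// Create a new dataset; the callback receives the new Dataset object.
bigquery.createDataset('my_new_dataset', function (err, dataset, apiResponse) {
  if (err) {
    console.error('Failed to create dataset:', err);
    return;
  }
  console.log('Dataset %s created.', dataset.id);
});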

Furthermore, the product documentation on cloud.google.com for each of the four APIs contains many how-to guides with targeted samples for all our supported languages. For example, here is the code for learning how to stream data into BigQuery:
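As a minimal sketch with the Node.js client (dataset and table names are placeholders, and the row keys must match your table’s schema; the how-to guides have the authoritative versions for each language):

var BigQuery = require('@google-cloud/bigquery');

var bigquery = BigQuery({ projectId: 'your-project-id' });
var table = bigquery.dataset('my_dataset').table('my_table');

// Stream one row into the table; rows become queryable within seconds.
table.insert([{ name: 'Ada', age: 36 }], function (err, insertErrors) {
  if (err) {
    console.error('Insert failed:', err);
  } else if (insertErrors && insertErrors.length > 0) {
    console.error('Row-level errors:', insertErrors);
  } else {
    console.log('Row streamed into BigQuery.');
  }
});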

Next steps

This is just the beginning for Google Cloud Client Libraries. Our first job is to make the libraries for these four APIs generally available. We’ll also add support for more APIs, improve our documentation across the board, and keep adding more samples.

We take developer experience seriously and want to hear from you. Feel free to file issues on GitHub in one of our client library repositories or ask questions on StackOverflow.

Happy Coding!

Red Hat’s OpenShift Dedicated now generally available on Google Cloud

Thursday, December 8, 2016


Today Red Hat is announcing the general availability of its OpenShift Dedicated service running on Google Cloud Platform (GCP). This combination helps speed the adoption of Kubernetes, containers and cloud-native application patterns.

We often hear from customers that they need open source tools that enable their applications across both their own data centers and multiple cloud providers. Our collaboration with Red Hat around Kubernetes and OpenShift is a great example of how we're committed to working with partners on open hybrid solutions.

OpenShift Dedicated on GCP offers a new option to enterprise IT organizations that want to use Red Hat container technology to deploy, manage and support their OpenShift instances. With OpenShift Dedicated, developers maintain control over the build and isolation process for their applications. Red Hat acts as the service provider, managing OpenShift Dedicated and offering support, helping customers focus more heavily on application development and business velocity. We'll also be working with Red Hat to make it easy for customers to augment their OpenShift applications with GCP’s broad and growing portfolio of services.

OpenShift and Kubernetes

As the second-largest contributor to the Kubernetes project, Red Hat is a key collaborator helping to evolve and mature the technology. Red Hat also uses Kubernetes as a foundation for Red Hat OpenShift Container Platform, which adds a service catalog, build automation, deployment automation and application lifecycle management to meet the needs of its enterprise customers.

OpenShift Dedicated is underpinned by Red Hat Enterprise Linux, and marries Red Hat’s enterprise-grade container application platform with Google’s 12+ years of operational expertise around containers (and the resulting optimization of our infrastructure for container-based workloads).

Enterprise developers who want to complement their on-premises infrastructure with cloud services and a global footprint, but who still want stable, more secure, open-source solutions, should try out OpenShift Dedicated on Google Cloud Platform, either as a complement to an on-premises OpenShift deployment or as a standalone offering. You can sign up for the service here. We welcome your feedback on how to make the service even better.

Example application: analyzing a Tweet stream using OpenShift and Google BigQuery

We’re also working with Red Hat to make it easy for you to augment your OpenShift-based applications wherever they run. Below is an early example of using BigQuery, Google's managed data warehouse, and Google Cloud Pub/Sub, its real-time messaging service, with Red Hat OpenShift Dedicated. This can be the starting point to incorporate social insights into your own services.




Step 0: If you don’t have a GCP account already, please sign up for Google Cloud Platform, set up billing and activate APIs.

Step 1: Next, set up a service account. A service account is a way to interact with your GCP resources using an identity other than your primary login, and is generally intended for server-to-server interaction. From the GCP Navigation Menu, click on "Permissions," then click on "Service accounts."

Click on "Create service account," which will prompt you to enter a service account name. Name your service account, click on "Furnish a new private key," and select the default "JSON" key type.

Step 2: Once you click "Create," a service account ".json" key file will be downloaded to your browser’s downloads location.

Important: Like any credential, this represents an access mechanism to authenticate and use resources in your GCP account — KEEP IT SAFE! Never place this file in a publicly accessible source repo (e.g., public GitHub).
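If you prefer the command line, recent versions of the Cloud SDK can do the same thing (the account name below is a placeholder):

# Create the service account.
$ gcloud iam service-accounts create openshift-demo --display-name "openshift-demo"

# Generate and download a JSON key for it.
$ gcloud iam service-accounts keys create ~/credentials.json \
    --iam-account openshift-demo@YOUR_PROJECT.iam.gserviceaccount.com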

Step 3: We’ll be using the JSON credential via a Kubernetes secret deployed to your OpenShift cluster. To do so, first perform a base64 encoding of your JSON credential file:

$ base64 -i ~/path/to/downloads/credentials.json

Keep the output (a very long string) ready for use in the next step, where you’ll replace ‘BASE64_CREDENTIAL_STRING’ in the secret example (below) with the output of the base64 encoding. If your base64 tool wraps its output across multiple lines, join it into a single line first; GNU base64 accepts -w 0 to disable wrapping.

Important: Note that base64 is encoded (not encrypted) and can be readily reversed, so this file (with the base64 string) should be treated with the same high degree of care as the credential file mentioned above.

Step 4: Create the Kubernetes secret inside your OpenShift cluster. A secret is the proper place to make sensitive information available to pods running in your cluster (like passwords or the credentials downloaded in the previous step). This is what your secret definition will look like (e.g., google-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: google-services-secret
type: Opaque
data:
  google-services.json: BASE64_CREDENTIAL_STRING


Replace ‘BASE64_CREDENTIAL_STRING’ with the base64 output from the prior step.

You’ll want to add this file to your source-control system (minus the credentials).

Step 5: Deploy the secret to the cluster:

$ oc create -f google-secret.yaml
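Alternatively, you can skip the manual base64 step entirely and let the CLI build the secret straight from the key file (kubectl syntax shown; per Step 7 below, substitute oc):

$ kubectl create secret generic google-services-secret \
    --from-file=google-services.json=$HOME/path/to/downloads/credentials.json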

Step 6: Now you can use Google APIs from your OpenShift cluster. To take your GCP-enabled cluster for a spin, try going through the steps detailed in Real-Time Data Analysis with Kubernetes, Cloud Pub/Sub and BigQuery, a solutions document. You’ll need to make two minor tweaks for the solution to work on your OpenShift cluster:

For any pod that needs to access Google APIs, modify it to create a reference to the secret, including exporting the environment variable “GOOGLE_APPLICATION_CREDENTIALS” to the pod (here’s more information on application default credentials).

In the Pub/Sub-BigQuery solution, that means you’ll modify two pod definitions: pubsub/bigquery-controller.yaml and pubsub/twitter-stream.yaml

For example:

apiVersion: v1
kind: ReplicationController
metadata:
  name: bigquery-controller
  labels:
    name: bigquery-controller
spec:
  …
  template:
    …
    spec:
      containers:
      - …                       # container name, image, etc.
        env:
        …
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/secretspath/google-services.json
        volumeMounts:
        - name: secrets
          mountPath: /etc/secretspath
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: google-services-secret


Step 7: Finally, anywhere the solution instructs you to use "kubectl," replace that with the equivalent OpenShift command "oc."
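For example:

# Where the solution says:
$ kubectl create -f pubsub/twitter-stream.yaml

# ...run this instead:
$ oc create -f pubsub/twitter-stream.yaml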

That’s it! If you follow along with the rest of the steps in the solution, you’ll soon be able to query (and see) tweets showing up in your BigQuery table — arriving via Cloud Pub/Sub. Going forward with your own deployments, all you need to do is follow the above steps of attaching the credential secret to any pod where you use Google Cloud SDKs and/or access Google APIs.
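For example, a quick way to peek at the arriving rows from the command line (the dataset and table names here are placeholders for whatever you created while following the solution):

$ bq query "SELECT text FROM [mydataset.tweets] LIMIT 5"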

Build highly available services with general availability of Regional Managed Instance Groups

Monday, December 5, 2016


Businesses choose to build applications on Google Cloud Platform (GCP) for our low-latency and reliable global network. As customers build applications that are increasingly business-critical, designing for high-availability is no longer optional. That’s why we’re pleased to announce the general availability of Regional Managed Instance Groups in Google Compute Engine.

With virtually no effort on the part of customers, this release offers a fully managed service for creating highly available applications: simply specify the region in which to run your application, and Compute Engine automatically balances your machines across independent zones within the region. Combined with load balancing and autoscaling of your machine instances, your applications scale up and down gracefully based on policies fully within your control.
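For example, assuming you’ve already created an instance template (the template and group names below are placeholders), spinning up a regional managed instance group from the command line looks like this:

# Create a regional managed instance group; instances are spread
# across zones in us-central1 automatically.
$ gcloud compute instance-groups managed create my-regional-group \
    --region us-central1 \
    --template my-template \
    --size 3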

Distributing your application instances across multiple zones is a best practice that protects against adverse events such as a bad application build, networking problems or a zonal outage. Together with overprovisioning the size of your managed instance group, these practices ensure high availability for your applications in the regions where you serve your users.

Customers ranging from major consumer-facing brands like Snap Inc. and Waze to popular services like BreezoMeter, Carousell and InShorts vetted regional managed instance groups during our alpha and beta periods.

It’s easy to get started with regional managed instance groups. Or let us know if we can assist with architecting your most important applications with the reliability users expect from today’s best cloud apps.


IBM’s software catalog now eligible to run on Google Cloud

Thursday, December 1, 2016


If your organization runs IBM software, we have news for you: Google Cloud Platform is now officially an IBM Eligible Public Cloud, meaning you can run a wide range of IBM software SKUs on Google Compute Engine with your existing licenses.

Under IBM's Bring Your Own Software License policy (BYOSL), customers who have licensed, or wish to license, IBM software through either Passport Advantage or an authorized reseller may now run that software on Compute Engine. This applies to the majority of IBM's vast catalog of software -- everything from middleware and DevOps products (WebSphere, MQ Series, DataPower, Tivoli) to data and analytics offerings (DB2, Informix, Cloudant, Cognos, BigInsights).


What comes next depends on you. Help us identify the IBM software that needs to be packaged, tuned, and optimized for Compute Engine. You can let us know what IBM software you plan to run on Google Cloud by taking this short survey. And feel free to reach out to me directly with any questions.

Making every (leap) second count with our new public NTP servers

Wednesday, November 30, 2016


As if 2016 wasn’t long enough, a leap second will make the last day of December one second longer than normal. But don’t worry: we’ve built support for the leap second into the time servers that regulate all Google services.

Even better, our Network Time Protocol (NTP) servers are now publicly available to anyone who needs to keep local clocks in sync with VM instances running on Google Compute Engine, match the time used by Google APIs, or simply use a reliable time service. As you would expect, our public NTP service is backed by Google’s load balancers and atomic clocks in data centers around the world.

Here’s how we plan to handle the leap second and keep things running smoothly here at Google. It’s based on what we learned during the leap seconds in 2008, 2012 and 2015.

Leap seconds compensate for small and unpredictable changes in the Earth's rotation, as determined by the International Earth Rotation and Reference Systems Service (IERS). The IERS typically announces them six months in advance but the need for leap seconds is very irregular. This year, the leap second will happen at 23:59:60 UTC on December 31, or 3:59:60 pm PST.

No commonly used operating system is able to handle a minute with 61 seconds, and trying to special-case the leap second has caused many problems in the past. Instead of adding a single extra second to the end of the day, we'll run the clocks 0.0014% slower across the ten hours before and ten hours after the leap second, and “smear” the extra second across these twenty hours. For timekeeping purposes, December 31 will seem like any other day.
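A quick back-of-envelope check on that figure (illustrative arithmetic only, not how our time servers are implemented):

// One extra SI second is spread across a 20-hour window.
var windowSeconds = 20 * 3600;                   // 72,000 clock seconds
var slowdown = 1 / (windowSeconds + 1);          // 72,000 ticks in 72,001 SI seconds
console.log((slowdown * 100).toFixed(4) + '%');  // -> "0.0014%"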

All Google services, including all APIs, will be synchronized on smeared time, as described above. You’ll also get smeared time for virtual machines on Compute Engine if you follow our recommended settings. You can use non-Google NTP servers if you don’t want your instances to use the leap smear, but don’t mix smearing and non-smearing time servers.
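For example, on a VM running ntpd, following that recommendation simply means listing only Google’s public time servers (and no others) in /etc/ntp.conf:

# /etc/ntp.conf -- use only Google Public NTP so every source agrees on the smear.
server time1.google.com iburst
server time2.google.com iburst
server time3.google.com iburst
server time4.google.com iburst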

If you need any assistance, please visit our Getting Help page.

Happy New Year, and let the good times roll.

One PowerShell cmdlet to manage both Windows and Linux resources — no kidding!

Tuesday, November 29, 2016


If you're managing Google Cloud Platform (GCP) resources from the command line on Windows, chances are you’re using our Cloud Tools for PowerShell. Thanks to PowerShell’s powerful scripting environment, including its ability to pipeline objects, you can efficiently author complex scripts to automate and manipulate your GCP resources.

However, PowerShell has historically only been available on Windows. So even though you had an uber-sophisticated PowerShell script to set up and monitor multiple Google Compute Engine and Google Cloud SQL instances, if you wanted to run it on Linux, you would have had to rewrite it in bash!

Fortunately, Microsoft recently released an alpha version of PowerShell that works on both OS X and Ubuntu, and we built a .NET Core version of Cloud Tools for PowerShell on top of it. Thanks to that, you no longer have to rewrite your Google Cloud PowerShell scripts just to make them work on Mac or Linux machines.

To preview the bits, you'll have to:
  1. Install Google Cloud SDK and initialize it.
  2. Install PowerShell.
  3. Download and unzip Cross-Platform Cloud Tools for PowerShell bits.

Now, from your Linux or OS X terminal, check out the following commands:

# Fire up PowerShell.
powershell


# Import the Cloud Tools for PowerShell module on OS X.
PS > Import-Module ~/Downloads/osx.10.11-x64/Google.PowerShell.dll


# List the name and size of all objects in a GCS bucket.
PS > Get-GcsObject -Bucket "quoct-photos" | Select Name, Size | Format-Table


If running GCP PowerShell cmdlets on Linux interests you, be sure to check out the post on how to run an ASP.NET Core app on Linux using Docker and Kubernetes. Because one thing is for certain: Google Cloud Platform is rapidly becoming a great place to run, and manage, Linux as well as Windows apps.

Happy scripting!