# List my public Google+ activities.
# 'service' is an API client object built with discovery.build (see below).
result = service.activities().list(userId='me', collection='public').execute()
activities = result.get('items', [])
for activity in activities:
    print(activity['title'])
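
For context, here is a minimal sketch of how the service object above might be constructed; the 'plus'/'v1' names match the activities call, but the API key is a placeholder and this construction is an assumption, not part of the original example.

# A minimal sketch, assuming public data readable with an API key.
from googleapiclient.discovery import build

# The key is a placeholder; supply your own from the API console.
service = build('plus', 'v1', developerKey='YOUR_API_KEY')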
Handle auth with fewer lines of code
The easy-to-use authentication library can reduce the amount of code you have to write for OAuth 2.0. Sometimes a few lines are all you need.
# Authorize server-to-server interactions from Google Compute Engine.
import httplib2
from oauth2client.contrib import gce

credentials = gce.AppAssertionCredentials(
    scope='https://www.googleapis.com/auth/devstorage.read_write')
http = credentials.authorize(httplib2.Http())
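
The authorized http object can then be handed to any API client. A minimal sketch of one use matching the devstorage scope above; the Cloud Storage call and the 'my-project' value are illustrative assumptions, not part of the original example.

# A minimal sketch: list Cloud Storage buckets with the authorized http.
from googleapiclient.discovery import build

storage = build('storage', 'v1', http=http)
# 'my-project' is a placeholder project ID.
buckets = storage.buckets().list(project='my-project').execute()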
Use the library with Google App Engine
App Engine-specific helpers make quick work of authenticated calls to APIs. There's no need to worry about exchanging an authorization code for tokens; you can get right down to writing the meat of your application.
# Restrict access to users who've granted access to Calendar info.
from google.appengine.ext import webapp
from oauth2client.contrib import appengine

decorator = appengine.OAuth2DecoratorFromClientSecrets(
    'client_secrets.json',
    scope='https://www.googleapis.com/auth/calendar')

class MainHandler(webapp.RequestHandler):

    @decorator.oauth_required
    def get(self):
        # 'service' is a Calendar API client built with discovery.build.
        http = decorator.http()
        events = service.events().list(calendarId='primary').execute(http=http)
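
One wiring detail the snippet leaves implicit: the decorator's OAuth callback route must be registered in the WSGI application so the authorization redirect can complete. A minimal sketch, assuming the webapp framework shown above; the '/' route is a placeholder.

# callback_path and callback_handler() come with the decorator;
# registering them handles the OAuth 2.0 redirect automatically.
app = webapp.WSGIApplication([
    ('/', MainHandler),
    (decorator.callback_path, decorator.callback_handler()),
])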