Releases: googleapis/google-cloud-java
0.1.6
Features
DNS
- `gcloud-java-dns`, a new client library for interacting with Google Cloud DNS, is released and is in alpha. See the docs for more information and samples.
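As a taste of the new module, connecting and reading a zone might look like the following; this is a minimal sketch that assumes `gcloud-java-dns` follows the same `Options`/`service()` pattern as the other modules, and the `getZone` accessor and zone name are illustrative:

```java
import com.google.gcloud.dns.Dns;
import com.google.gcloud.dns.DnsOptions;
import com.google.gcloud.dns.Zone;

// Same entry-point pattern as the other gcloud-java modules.
Dns dns = DnsOptions.defaultInstance().service();
// "some-sample-zone" is a placeholder zone name.
Zone zone = dns.getZone("some-sample-zone");
System.out.println("Zone " + zone.name() + " manages " + zone.dnsName());
```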
Resource Manager
- Project-level IAM (Identity and Access Management) functionality is now available. See docs and example code here.
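For example, reading a project's IAM policy might look like this; a sketch only, where the `getPolicy` accessor and `bindings()` getter are assumptions about the new API:

```java
import com.google.gcloud.resourcemanager.Policy;
import com.google.gcloud.resourcemanager.ResourceManager;
import com.google.gcloud.resourcemanager.ResourceManagerOptions;

ResourceManager resourceManager = ResourceManagerOptions.defaultInstance().service();
// "some-project-id" is a placeholder; getPolicy/bindings are assumed names.
Policy policy = resourceManager.getPolicy("some-project-id");
System.out.println("Current bindings: " + policy.bindings());
```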
Fixes
BigQuery
- `startPageToken` is now called `pageToken` (#774) and `maxResults` is now called `pageSize` (#745) to be consistent with page-based listing methods in other `gcloud-java` modules.
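For example, page-based dataset listing would now read as follows; a sketch assuming the option factories follow the rename (e.g. `BigQuery.DatasetListOption.pageSize`):

```java
import com.google.gcloud.Page;
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.Dataset;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
// pageSize(50) replaces the old maxResults(50).
Page<Dataset> page = bigquery.listDatasets(BigQuery.DatasetListOption.pageSize(50));
for (Dataset dataset : page.values()) {
  System.out.println(dataset.datasetId());
}
```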
Storage
- The default content type, formerly a required field for bucket creation and for copying/composing blobs, has been removed (#288, #762).
- A new boolean `overrideInfo` is added to copy requests to denote whether metadata should be overridden (#762), as sketched below.
- `startPageToken` is now called `pageToken` (#774) and `maxResults` is now called `pageSize` (#745) to be consistent with page-based listing methods in other `gcloud-java` modules.
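A copy that replaces rather than merges the target's metadata might then look like this; a sketch that assumes the flag is exposed as an `overrideInfo(boolean)` setter on `CopyRequest`'s builder:

```java
import com.google.gcloud.storage.BlobId;
import com.google.gcloud.storage.BlobInfo;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

Storage storage = StorageOptions.defaultInstance().service();
Storage.CopyRequest request = Storage.CopyRequest.builder()
    .source(BlobId.of("my-bucket", "source-blob"))
    .target(BlobInfo.builder("my-bucket", "target-blob").contentType("text/plain").build())
    .overrideInfo(true) // assumed setter: replace the target's metadata wholesale
    .build();
storage.copy(request);
```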
0.1.5
Features
Storage
- Add a `versions(boolean versions)` option to `BlobListOption` to enable/disable versioned `Blob` listing, as sketched below. If enabled, all versions of an object are returned as distinct results (#688).
- `BlobTargetOption` and `BlobWriteOption` classes are added to `Bucket` to allow setting options for `create` methods (#705).
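Versioned listing might then be used like this (a sketch; the listing call shape follows the existing `Storage.list` API):

```java
import com.google.gcloud.Page;
import com.google.gcloud.storage.Blob;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

Storage storage = StorageOptions.defaultInstance().service();
// With versions(true), each version of an object comes back as its own result.
Page<Blob> blobs = storage.list("my-bucket", Storage.BlobListOption.versions(true));
for (Blob blob : blobs.values()) {
  System.out.println(blob.name());
}
```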
Fixes
BigQuery
- Fix pagination when listing tables and datasets with selected fields (#668).
Core
- Fix an authentication issue when using revoked Cloud SDK credentials with local test helpers. The `NoAuthCredentials` class is added, along with the `AuthCredentials.noAuth()` method, to be used when testing services against local emulators (#719).
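Pointing a client at a local emulator might then look like the following sketch; the emulator address is a placeholder and the builder methods are the standard `ServiceOptions` setters:

```java
import com.google.gcloud.AuthCredentials;
import com.google.gcloud.datastore.Datastore;
import com.google.gcloud.datastore.DatastoreOptions;

Datastore datastore = DatastoreOptions.builder()
    .projectId("my-test-project")              // any ID works against an emulator
    .host("http://localhost:8080")             // placeholder emulator address
    .authCredentials(AuthCredentials.noAuth()) // skip real credential resolution
    .build()
    .service();
```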
Storage
- Fix pagination when listing blobs and buckets with selected fields (#668).
- Fix wrong usage of `Storage.BlobTargetOption` and `Storage.BlobWriteOption` in `Bucket`'s `create` methods. New classes (`Bucket.BlobTargetOption` and `Bucket.BlobWriteOption`) are added to provide options to `Bucket.create` (#705); see the sketch after this list.
- Fix a "Failed to parse Content-Range header" error when `BlobWriteChannel` writes a blob whose size is a multiple of the chunk size used (#725).
- Fix an NPE when reading a blob whose size is a multiple of the chunk/buffer size with `BlobReadChannel` (#725).
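A minimal sketch of the corrected `Bucket.create` usage; the exact signature and the `doesNotExist()` option name are assumptions for illustration:

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import com.google.gcloud.storage.Blob;
import com.google.gcloud.storage.Bucket;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

Storage storage = StorageOptions.defaultInstance().service();
Bucket bucket = storage.get("my-bucket");
// The bucket-scoped option classes replace the misused Storage-scoped ones.
Blob blob = bucket.create("my-blob", "hello".getBytes(UTF_8), "text/plain",
    Bucket.BlobTargetOption.doesNotExist());
```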
0.1.4
Features
BigQuery
- The `JobInfo` and `TableInfo` class hierarchies are flattened (#584, #600). Instead, `JobInfo` contains a field `JobConfiguration`, which is subclassed to provide configurations for different types of jobs. Likewise, `TableInfo` contains a new field `TableDefinition`, which is subclassed to provide table settings depending on the table type.
- Functional classes (`Job`, `Table`, `Dataset`) now extend their associated metadata classes (`JobInfo`, `TableInfo`, `DatasetInfo`) (#530, #609). The `BigQuery` service methods now return functional objects instead of the metadata objects.
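As a sketch of the flattened shape, creating a query job now pairs `JobInfo` with a configuration object; the concrete subclass name `QueryJobConfiguration` used below is illustrative:

```java
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.Job;
import com.google.gcloud.bigquery.JobInfo;
import com.google.gcloud.bigquery.QueryJobConfiguration;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
// JobInfo now holds a JobConfiguration subclass instead of being subclassed itself.
QueryJobConfiguration configuration =
    QueryJobConfiguration.of("SELECT field FROM dataset.table");
Job job = bigquery.create(JobInfo.of(configuration)); // returns the functional Job
```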
Datastore
- Setting list properties containing values of a single type is more concise (#640, #648). For example, to set a list of string values as a property on an entity, you'd previously have to type:

```java
someEntity.set("someStringListProperty",
    StringValue.of("a"), StringValue.of("b"), StringValue.of("c"));
```

Now you can set the property using:

```java
someEntity.set("someStringListProperty", "a", "b", "c");
```
- There is now a more concise way to get the parent of an entity key (#640, #648):

```java
Key parentOfCompleteKey = someKey.parent();
```
- The consistency setting (defaults to 0.9 both before and after this change) can be set in `LocalGcdHelper` (#639, #648).
- You no longer have to cast or use the unknown type when getting a `ListValue` from an entity (#648). Now you can use something like the following to get a list of double values:

```java
List<DoubleValue> doublesList = someEntity.get("myDoublesList");
```
ResourceManager
- Paging for the `ResourceManager` `list` method is now supported (#651), as sketched below.
- `Project` is now a subclass of `ProjectInfo` (#530). The `ResourceManager` service methods now return `Project` instead of `ProjectInfo`.
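Consuming the pages might look like the following sketch, assuming `Page` exposes the `values()`/`nextPage()` accessors used by the other modules' listing methods:

```java
import com.google.gcloud.Page;
import com.google.gcloud.resourcemanager.Project;
import com.google.gcloud.resourcemanager.ResourceManager;
import com.google.gcloud.resourcemanager.ResourceManagerOptions;

ResourceManager resourceManager = ResourceManagerOptions.defaultInstance().service();
Page<Project> page = resourceManager.list();
while (page != null) {
  for (Project project : page.values()) {
    System.out.println(project.projectId()); // Project inherits ProjectInfo's accessors
  }
  page = page.nextPage(); // assumed to return null once exhausted
}
```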
Storage
- Functional classes (`Bucket`, `Blob`) now extend their associated metadata classes (`BucketInfo`, `BlobInfo`) (#530, #603, #614). The `Storage` service methods now return functional objects instead of metadata objects.
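In practice this means the returned object can act on itself; a minimal sketch:

```java
import com.google.gcloud.storage.Blob;
import com.google.gcloud.storage.BlobId;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

Storage storage = StorageOptions.defaultInstance().service();
// get() now returns the functional Blob, which inherits BlobInfo's accessors...
Blob blob = storage.get(BlobId.of("my-bucket", "my-blob"));
System.out.println(blob.name() + ": " + blob.contentType());
// ...and can operate on itself without going back through the service.
blob.delete();
```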
Fixes
BigQuery
0.1.3
Features
BigQuery
- Resumable uploads via write channel are now supported (#540). An example of uploading a CSV file in chunks of `CHUNK_SIZE` bytes:
```java
// Upload a local CSV file to BigQuery in chunks of CHUNK_SIZE bytes.
try (FileChannel fileChannel = FileChannel.open(Paths.get("/path/to/your/file"))) {
  TableId tableId = TableId.of("YourDataset", "YourTable");
  LoadConfiguration configuration = LoadConfiguration.of(tableId, FormatOptions.of("CSV"));
  WriteChannel writeChannel = bigquery.writer(configuration);
  long position = 0;
  long written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
  while (written > 0) {
    position += written;
    written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
  }
  writeChannel.close();
}
```
- `defaultDataset(String dataset)` (in `QueryJobInfo` and `QueryRequest`) can be used to specify a default dataset (#567).
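A query against unqualified table names might then be issued as follows; a sketch in which the `QueryRequest.builder(String)` shape and `QueryResponse` return type are assumptions:

```java
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.QueryRequest;
import com.google.gcloud.bigquery.QueryResponse;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
QueryRequest request = QueryRequest.builder("SELECT field FROM table")
    .defaultDataset("my_dataset") // unqualified table names resolve against this dataset
    .build();
QueryResponse response = bigquery.query(request);
```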
Storage
- The name of the method to submit a batch request has changed from `apply` to `submit` (#562).
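A batch round trip would now read like this sketch; the `BatchRequest` builder methods shown are assumptions consistent with the pre-rename API:

```java
import com.google.gcloud.storage.BatchRequest;
import com.google.gcloud.storage.BatchResponse;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

Storage storage = StorageOptions.defaultInstance().service();
BatchRequest batch = BatchRequest.builder()
    .delete("my-bucket", "old-blob-1")
    .delete("my-bucket", "old-blob-2")
    .build();
BatchResponse response = storage.submit(batch); // previously storage.apply(batch)
```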
Fixes
BigQuery
- `hashCode` and `equals` are now overridden in subclasses of `BaseTableInfo` (#565, #573).
- `jobComplete` is renamed to `jobCompleted` in `QueryResult` (#567).
Datastore
- The precondition check that cursors are UTF-8 strings has been removed (#578).
- `EntityQuery`, `KeyQuery`, and `ProjectionEntityQuery` classes have been introduced (#585). This enables users to modify projections and group-by clauses for projection entity queries after using `toBuilder()`. For example, this now works:

```java
ProjectionEntityQuery query = Query.projectionEntityQueryBuilder()
    .kind("Person")
    .projection(Projection.property("name"))
    .build();
ProjectionEntityQuery newQuery =
    query.toBuilder().projection(Projection.property("favorite_food")).build();
```
0.1.2
Features
Core
- By default, requests are now retried (#547). For example:
```java
// Use the default retry strategy
Storage storageWithRetries = StorageOptions.defaultInstance().service();

// Don't use retries
Storage storageWithoutRetries = StorageOptions.builder()
    .retryParams(RetryParams.noRetries())
    .build()
    .service();
```
BigQuery
- Functional classes for datasets, jobs, and tables are added (#516).
- Query Plan is now supported (#523).
- Template suffix is now supported (#514).
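Streaming inserts with a template suffix might look like the following sketch; the `templateSuffix` setter and `addRow` overload are assumptions based on the feature description:

```java
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.InsertAllRequest;
import com.google.gcloud.bigquery.InsertAllResponse;
import com.google.gcloud.bigquery.TableId;
import java.util.Collections;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
InsertAllRequest request = InsertAllRequest.builder(TableId.of("dataset", "base_table"))
    // Rows land in "base_table_20160101", created on demand from the base table's schema.
    .templateSuffix("_20160101")
    .addRow(Collections.<String, Object>singletonMap("fieldName", 1))
    .build();
InsertAllResponse response = bigquery.insertAll(request);
```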
Fixes
Datastore
- `QueryResults.cursorAfter()` is now set when all results from a query have been exhausted (#549). When running large queries, users may see Datastore-internal errors with code 500 due to a Datastore issue. This issue will be fixed in the next version of Datastore. Until then, users can set a limit on their query and use the cursor to get more results in subsequent queries. Here is an example:
```java
int limit = 100;
StructuredQuery<Entity> query = Query.entityQueryBuilder()
    .kind("user")
    .limit(limit)
    .build();
while (true) {
  QueryResults<Entity> results = datastore.run(query);
  int resultCount = 0;
  while (results.hasNext()) { // consume all results
    Entity result = results.next();
    // do something with the result
    resultCount++;
  }
  if (resultCount < limit) {
    break;
  }
  query = query.toBuilder().startCursor(results.cursorAfter()).build();
}
```
- `load` is renamed to `get` in functional classes (#535).
0.1.1
Features
BigQuery
- Introduce support for Google Cloud BigQuery (#503): create datasets and tables, manage jobs, insert and list table data. See BigQueryExample for a complete example or the API Documentation for `gcloud-java-bigquery` javadoc.

```java
import com.google.gcloud.bigquery.BaseTableInfo;
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.Field;
import com.google.gcloud.bigquery.JobStatus;
import com.google.gcloud.bigquery.LoadJobInfo;
import com.google.gcloud.bigquery.Schema;
import com.google.gcloud.bigquery.TableId;
import com.google.gcloud.bigquery.TableInfo;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
TableId tableId = TableId.of("dataset", "table");
BaseTableInfo info = bigquery.getTable(tableId);
if (info == null) {
  System.out.println("Creating table " + tableId);
  Field integerField = Field.of("fieldName", Field.Type.integer());
  bigquery.create(TableInfo.of(tableId, Schema.of(integerField)));
} else {
  System.out.println("Loading data into table " + tableId);
  LoadJobInfo loadJob = LoadJobInfo.of(tableId, "gs://bucket/path");
  loadJob = bigquery.create(loadJob);
  while (loadJob.status().state() != JobStatus.State.DONE) {
    Thread.sleep(1000L);
    loadJob = bigquery.getJob(loadJob.jobId());
  }
  if (loadJob.status().error() != null) {
    System.out.println("Job completed with errors");
  } else {
    System.out.println("Job succeeded");
  }
}
```
Resource Manager
- Introduce support for Google Cloud Resource Manager (#495): get a list of all projects associated with an account, create/update/delete projects, and undelete projects that you don't want deleted. See ResourceManagerExample for a complete example or the API Documentation for `gcloud-java-resourcemanager` javadoc.

```java
import com.google.gcloud.resourcemanager.ProjectInfo;
import com.google.gcloud.resourcemanager.ResourceManager;
import com.google.gcloud.resourcemanager.ResourceManagerOptions;

import java.util.Iterator;

ResourceManager resourceManager = ResourceManagerOptions.defaultInstance().service();

// Replace "some-project-id" with an existing project's ID
ProjectInfo myProject = resourceManager.get("some-project-id");
ProjectInfo newProjectInfo = resourceManager.replace(myProject.toBuilder()
    .addLabel("launch-status", "in-development").build());
System.out.println("Updated the labels of project " + newProjectInfo.projectId()
    + " to be " + newProjectInfo.labels());

// List all the projects you have permission to view.
Iterator<ProjectInfo> projectIterator = resourceManager.list().iterateAll();
System.out.println("Projects I can view:");
while (projectIterator.hasNext()) {
  System.out.println(projectIterator.next().projectId());
}
```
Storage
- Remove the `RemoteGcsHelper.create(String, String)` method (#494).
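Tests would instead rely on the environment-driven factory, along the lines of this sketch (the no-argument `create()` and `options()` accessors are assumed; see the 0.1.0 notes below for the variables involved):

```java
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.testing.RemoteGcsHelper;

// Reads GCLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS from the environment.
RemoteGcsHelper helper = RemoteGcsHelper.create();
Storage storage = helper.options().service();
```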
Fixes
Datastore
- HTTP transport is now specified in `DefaultDatastoreRpc` (#448).
0.1.0
Features
Core
- The project ID set in the Google Cloud SDK now supersedes the project ID set by Compute Engine (#337); a sketch follows this list.
Before
The project ID is determined by iterating through the following list in order, stopping when a valid project ID is found:
- Project ID supplied when building the service options
- Project ID specified by the environment variable `GCLOUD_PROJECT`
- App Engine project ID
- Compute Engine project ID
- Google Cloud SDK project ID
After
- Project ID supplied when building the service options
- Project ID specified by the environment variable `GCLOUD_PROJECT`
- App Engine project ID
- Google Cloud SDK project ID
- Compute Engine project ID
- The explicit `AuthCredentials.noCredentials` option was removed.
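As referenced above, a minimal sketch of how the new precedence plays out when building service options:

```java
import com.google.gcloud.storage.StorageOptions;

// An explicitly supplied project ID always wins.
StorageOptions explicit = StorageOptions.builder().projectId("my-project").build();

// With no explicit ID, resolution walks the list above in order, now
// preferring the Cloud SDK's configured project over Compute Engine's.
StorageOptions resolved = StorageOptions.builder().build();
```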
Storage
- The testing helper class `RemoteGcsHelper` now uses the `GOOGLE_APPLICATION_CREDENTIALS` and `GCLOUD_PROJECT` environment variables to set credentials and project (#335, #339).

Before

```
export GCLOUD_TESTS_PROJECT_ID="MY_PROJECT_ID"
export GCLOUD_TESTS_KEY=/path/to/my/key.json
```

After

```
export GCLOUD_PROJECT="MY_PROJECT_ID"
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my/key.json
```
- `BlobReadChannel` throws a `StorageException` if a blob is updated during a read (#359, #390).
- `generation` is moved from `BlobInfo` to `BlobId`, and `generationMatch` and `generationNotMatch` methods are added to `BlobSourceOption` and `BlobGetOption` (#363, #366).

Before

```java
BlobInfo myBlobInfo = someAlreadyExistingBlobInfo.toBuilder().generation(1L).build();
```

After

```java
BlobId myBlobId = BlobId.of("bucketName", "idName", 1L);
```
- `Blob`'s batch delete method now returns `false` for blobs that were not found (#380).
Fixes
Core
- An exception is no longer thrown when reading the default project ID in the App Engine environment (#378).
- `SocketTimeoutException`s are now retried (#410, #414).
Datastore
- A `SocketException` is no longer thrown when creating the Datastore service object from within the App Engine production environment (#411).