
Commit 3c5146c

xinlian12 and annie-mac authored
Corrected misspelled instances of "ServicePrincipal" as "ServicePrinciple" (Azure#37121)
* fix misspelled servicePrincipal

Co-authored-by: annie-mac <xinlian@microsoft.com>
1 parent df200a0 commit 3c5146c

File tree

10 files changed: +205 −92 lines

sdk/cosmos/azure-cosmos-spark_3-1_2-12/CHANGELOG.md

Lines changed: 2 additions & 1 deletion
@@ -17,6 +17,7 @@

 #### Bugs Fixed
 * Fixed an issue with backpressure when using WriteStrategy `ItemBulkUpdate` - with this write strategy a Reactor operator `bufferTimeout` was used, which has issues when backpressure happens and can result in an error `OverflowException: Could not emit buffer due to lack of requests`. See [PR 37072](https://github.com/Azure/azure-sdk-for-java/pull/37072)
+* Fixed misspelled authType from `ServicePrinciple` to `ServicePrincipal`. For backward compatibility, `ServicePrinciple` will still be supported in the config - See [PR 37121](https://github.com/Azure/azure-sdk-for-java/pull/37121)

 ### 4.22.0 (2023-09-19)

@@ -82,7 +83,7 @@
 ### 4.17.0 (2023-02-17)

 #### Features Added
-* Added Service Principle based AAD Auth - See [PR 32393](https://github.com/Azure/azure-sdk-for-java/pull/32393) and [PR 33449](https://github.com/Azure/azure-sdk-for-java/pull/33449)
+* Added Service Principal based AAD Auth - See [PR 32393](https://github.com/Azure/azure-sdk-for-java/pull/32393) and [PR 33449](https://github.com/Azure/azure-sdk-for-java/pull/33449)
 * Added capability to allow modification of throughput in Spark via `ALTER TABLE` or `ALTER DATABASE` command. - See [PR 33369](https://github.com/Azure/azure-sdk-for-java/pull/33369)

 #### Bugs Fixed
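The changelog entry above concerns the value accepted by the connector's auth-type setting. As a rough illustration only (not taken from this commit), the sketch below shows a Spark configuration map using the corrected `ServicePrincipal` value; the `spark.cosmos.*` option names are assumptions to double-check against the Cosmos DB Spark connector configuration reference, and per the changelog the legacy `ServicePrinciple` spelling remains accepted for backward compatibility.

```scala
// Minimal sketch of service-principal (AAD) auth for the Cosmos DB Spark connector.
// Option names are assumed from the connector's configuration reference; values are placeholders.
val cosmosAadCfg = Map(
  "spark.cosmos.accountEndpoint" -> "https://<your-account>.documents.azure.com:443/",
  "spark.cosmos.auth.type" -> "ServicePrincipal",          // corrected spelling; "ServicePrinciple" still accepted
  "spark.cosmos.account.subscriptionId" -> "<subscription-id>",
  "spark.cosmos.account.tenantId" -> "<tenant-id>",
  "spark.cosmos.account.resourceGroupName" -> "<resource-group>",
  "spark.cosmos.auth.aad.clientId" -> "<client-id>",
  "spark.cosmos.auth.aad.clientSecret" -> "<client-secret>",
  "spark.cosmos.database" -> "SampleDatabase",
  "spark.cosmos.container" -> "GreenTaxiRecords"
)

// Hypothetical usage: read a container using the service-principal credentials.
val df = spark.read.format("cosmos.oltp").options(cosmosAadCfg).load()
```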

sdk/cosmos/azure-cosmos-spark_3-2_2-12/CHANGELOG.md

Lines changed: 4 additions & 3 deletions
@@ -17,6 +17,7 @@

 #### Bugs Fixed
 * Fixed an issue with backpressure when using WriteStrategy `ItemBulkUpdate` - with this write strategy a Reactor operator `bufferTimeout` was used, which has issues when backpressure happens and can result in an error `OverflowException: Could not emit buffer due to lack of requests`. See [PR 37072](https://github.com/Azure/azure-sdk-for-java/pull/37072)
+* Fixed misspelled authType from `ServicePrinciple` to `ServicePrincipal`. For backward compatibility, `ServicePrinciple` will still be supported in the config - See [PR 37121](https://github.com/Azure/azure-sdk-for-java/pull/37121)

 ### 4.22.0 (2023-09-19)

@@ -77,16 +78,16 @@
 ### 4.17.1 (2023-02-27)

 #### Bugs Fixed
-- Fixed LSN offset for Spark 2 -> Spark 3 offset conversion UDF function - See [PR 33757](https://github.com/Azure/azure-sdk-for-java/pull/33757)
+* Fixed LSN offset for Spark 2 -> Spark 3 offset conversion UDF function - See [PR 33757](https://github.com/Azure/azure-sdk-for-java/pull/33757)

 ### 4.17.0 (2023-02-17)

 #### Features Added
-* Added Service Principle based AAD Auth - See [PR 32393](https://github.com/Azure/azure-sdk-for-java/pull/32393) and [PR 33449](https://github.com/Azure/azure-sdk-for-java/pull/33449)
+* Added Service Principal based AAD Auth - See [PR 32393](https://github.com/Azure/azure-sdk-for-java/pull/32393) and [PR 33449](https://github.com/Azure/azure-sdk-for-java/pull/33449)
 * Added capability to allow modification of throughput in Spark via `ALTER TABLE` or `ALTER DATABASE` command. - See [PR 33369](https://github.com/Azure/azure-sdk-for-java/pull/33369)

 #### Bugs Fixed
-- Change feed pull API is using an incorrect key value for collection lookup, which can result in using the old collection in collection recreate scenarios. - See [PR 33178](https://github.com/Azure/azure-sdk-for-java/pull/33178)
+* Change feed pull API is using an incorrect key value for collection lookup, which can result in using the old collection in collection recreate scenarios. - See [PR 33178](https://github.com/Azure/azure-sdk-for-java/pull/33178)

 ### 4.16.0 (2023-01-13)

sdk/cosmos/azure-cosmos-spark_3-3_2-12/CHANGELOG.md

Lines changed: 4 additions & 3 deletions
@@ -17,6 +17,7 @@

 #### Bugs Fixed
 * Fixed an issue with backpressure when using WriteStrategy `ItemBulkUpdate` - with this write strategy a Reactor operator `bufferTimeout` was used, which has issues when backpressure happens and can result in an error `OverflowException: Could not emit buffer due to lack of requests`. See [PR 37072](https://github.com/Azure/azure-sdk-for-java/pull/37072)
+* Fixed misspelled authType from `ServicePrinciple` to `ServicePrincipal`. For backward compatibility, `ServicePrinciple` will still be supported in the config - See [PR 37121](https://github.com/Azure/azure-sdk-for-java/pull/37121)

 ### 4.22.0 (2023-09-19)

@@ -77,16 +78,16 @@
 ### 4.17.1 (2023-02-27)

 #### Bugs Fixed
-- Fixed LSN offset for Spark 2 -> Spark 3 offset conversion UDF function - See [PR 33757](https://github.com/Azure/azure-sdk-for-java/pull/33757)
+* Fixed LSN offset for Spark 2 -> Spark 3 offset conversion UDF function - See [PR 33757](https://github.com/Azure/azure-sdk-for-java/pull/33757)

 ### 4.17.0 (2023-02-17)

 #### Features Added
-* Added Service Principle based AAD Auth - See [PR 32393](https://github.com/Azure/azure-sdk-for-java/pull/32393) and [PR 33449](https://github.com/Azure/azure-sdk-for-java/pull/33449)
+* Added Service Principal based AAD Auth - See [PR 32393](https://github.com/Azure/azure-sdk-for-java/pull/32393) and [PR 33449](https://github.com/Azure/azure-sdk-for-java/pull/33449)
 * Added capability to allow modification of throughput in Spark via `ALTER TABLE` or `ALTER DATABASE` command. - See [PR 33369](https://github.com/Azure/azure-sdk-for-java/pull/33369)

 #### Bugs Fixed
-- Change feed pull API is using an incorrect key value for collection lookup, which can result in using the old collection in collection recreate scenarios. - See [PR 33178](https://github.com/Azure/azure-sdk-for-java/pull/33178)
+* Change feed pull API is using an incorrect key value for collection lookup, which can result in using the old collection in collection recreate scenarios. - See [PR 33178](https://github.com/Azure/azure-sdk-for-java/pull/33178)

 ### 4.16.0 (2023-01-13)

sdk/cosmos/azure-cosmos-spark_3-4_2-12/CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@

 #### Bugs Fixed
 * Fixed an issue with backpressure when using WriteStrategy `ItemBulkUpdate` - with this write strategy a Reactor operator `bufferTimeout` was used, which has issues when backpressure happens and can result in an error `OverflowException: Could not emit buffer due to lack of requests`. See [PR 37072](https://github.com/Azure/azure-sdk-for-java/pull/37072)
+* Fixed misspelled authType from `ServicePrinciple` to `ServicePrincipal`. For backward compatibility, `ServicePrinciple` will still be supported in the config - See [PR 37121](https://github.com/Azure/azure-sdk-for-java/pull/37121)

 ### 4.22.0 (2023-09-19)

sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala/NYC-Taxi-Data/01_Batch_AAD.scala

Lines changed: 23 additions & 23 deletions
@@ -1,17 +1,17 @@
 // Databricks notebook source
 // MAGIC %md
 // MAGIC **Secrets**
-// MAGIC
+// MAGIC
 // MAGIC The secrets below like the Cosmos account key are retrieved from a secret scope. If you haven't defined a secret scope for a Cosmos account you want to use when going through this sample, you can find the instructions on how to create one here:
 // MAGIC - Here you can [Create a new secret scope](./#secrets/createScope) for the current Databricks workspace
-// MAGIC - See how you can create an [Azure Key Vault backed secret scope](https://docs.microsoft.com/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
+// MAGIC - See how you can create an [Azure Key Vault backed secret scope](https://docs.microsoft.com/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
 // MAGIC - See how you can create a [Databricks backed secret scope](https://docs.microsoft.com/azure/databricks/security/secrets/secret-scopes#create-a-databricks-backed-secret-scope)
 // MAGIC - And here you can find information on how to [add secrets to your Spark configuration](https://docs.microsoft.com/azure/databricks/security/secrets/secrets#read-a-secret)
 // MAGIC If you don't want to use secrets at all you can of course also just assign the values in clear-text below - but for obvious reasons we recommend the usage of secrets.

 // COMMAND ----------

-val authType = "ServicePrinciple"
+val authType = "ServicePrincipal"
 val cosmosEndpoint = spark.conf.get("spark.cosmos.accountEndpoint")
 val subscriptionId = spark.conf.get("spark.cosmos.subscriptionId")
 val tenantId = spark.conf.get("spark.cosmos.tenantId")
@@ -31,7 +31,7 @@ val clientSecret = spark.conf.get("spark.cosmos.aad.clientSecret")
 // COMMAND ----------

 // MAGIC %md
-// MAGIC
+// MAGIC
 // MAGIC Configure the Catalog API to be used for main workload

 // COMMAND ----------
@@ -49,7 +49,7 @@ spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.views.repositoryPat
 // COMMAND ----------

 // MAGIC %md
-// MAGIC
+// MAGIC
 // MAGIC Configure the Catalog API to be used for throughput control. This will only be needed if different account is used for throughput control

 // COMMAND ----------
@@ -68,25 +68,25 @@ spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.views.repositoryPat

 // MAGIC %sql
 // MAGIC CREATE DATABASE IF NOT EXISTS cosmosCatalog.SampleDatabase;
-// MAGIC
+// MAGIC
 // MAGIC CREATE TABLE IF NOT EXISTS cosmosCatalog.SampleDatabase.GreenTaxiRecords
 // MAGIC USING cosmos.oltp
 // MAGIC TBLPROPERTIES(partitionKeyPath = '/id', autoScaleMaxThroughput = '100000', indexingPolicy = 'OnlySystemProperties');
-// MAGIC
+// MAGIC
 // MAGIC CREATE TABLE IF NOT EXISTS cosmosCatalog.SampleDatabase.GreenTaxiRecordsCFSink
 // MAGIC USING cosmos.oltp
 // MAGIC TBLPROPERTIES(partitionKeyPath = '/id', autoScaleMaxThroughput = '100000', indexingPolicy = 'OnlySystemProperties');
-// MAGIC
+// MAGIC
 // MAGIC /* NOTE: It is important to enable TTL (can be off/-1 by default) on the throughput control container */
 // MAGIC /* If you are using a different account for throughput control, then please reference following commented examples */
 // MAGIC CREATE TABLE IF NOT EXISTS cosmosCatalog.SampleDatabase.ThroughputControl
 // MAGIC USING cosmos.oltp
 // MAGIC OPTIONS(spark.cosmos.database = 'SampleDatabase')
 // MAGIC TBLPROPERTIES(partitionKeyPath = '/groupId', autoScaleMaxThroughput = '4000', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1');
-// MAGIC
+// MAGIC
 // MAGIC -- /* If you are using a different account for throughput control, then please use throughput control catalog account for initializing containers */
 // MAGIC -- CREATE DATABASE IF NOT EXISTS throughputControlCatalog.SampleDatabase;
-// MAGIC
+// MAGIC
 // MAGIC -- CREATE TABLE IF NOT EXISTS throughputControlCatalog.SampleDatabase.ThroughputControl
 // MAGIC -- USING cosmos.oltp
 // MAGIC -- OPTIONS(spark.cosmos.database = 'SampleDatabase')
@@ -96,7 +96,7 @@ spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.views.repositoryPat

 // MAGIC %md
 // MAGIC **Preparation - loading data source "[NYC Taxi & Limousine Commission - green taxi trip records](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-green-taxi-trip-records/)"**
-// MAGIC
+// MAGIC
 // MAGIC The green taxi trip records include fields capturing pick-up and drop-off dates/times, pick-up and drop-off locations, trip distances, itemized fares, rate types, payment types, and driver-reported passenger counts. This data set has over 80 million records (>8 GB) of data and is available via a publicly accessible Azure Blob Storage Account located in the East-US Azure region.

 // COMMAND ----------
@@ -123,8 +123,8 @@ spark.conf.set(
 blob_sas_token)
 print(s"Remote blob path: ${wasbs_path}")
 // SPARK read parquet, note that it won't load any data yet by now
-// NOTE - if you want to experiment with larger dataset sizes - consider switching to Option B (commenting code
-// for Option A/uncommenting code for option B) the lines below or increase the value passed into the
+// NOTE - if you want to experiment with larger dataset sizes - consider switching to Option B (commenting code
+// for Option A/uncommenting code for option B) the lines below or increase the value passed into the
 // limit function restricting the dataset size below

 // ------------------------------------------------------------------------------------
@@ -152,7 +152,7 @@ print("Finished preparation: ${formatter.format(Instant.now)}")

 // MAGIC %md
 // MAGIC ** Sample - ingesting the NYC Green Taxi data into Cosmos DB**
-// MAGIC
+// MAGIC
 // MAGIC By setting the target throughput threshold to 0.95 (95%) we reduce throttling but still allow the ingestion to consume most of the provisioned throughput. For scenarios where ingestion should only take a smaller subset of the available throughput this threshold can be reduced accordingly.

 // COMMAND ----------
@@ -199,7 +199,7 @@ println(s"Finished ingestion: ${formatter.format(Instant.now)}")
 // COMMAND ----------

 val count_source = spark.sql("SELECT * FROM source").count()
-println(s"Number of records in source: ${count_source}")
+println(s"Number of records in source: ${count_source}")

 // COMMAND ----------
@@ -230,7 +230,7 @@ val readCfg = Map(
 val count_query_schema=StructType(Array(StructField("Count", LongType, true)))
 val query_df = spark.read.format("cosmos.oltp").schema(count_query_schema).options(readCfg).load()
 val count_query = query_df.agg(sum("Count").as("TotalCount")).first.getLong(0)
-println(s"Number of records retrieved via query: ${count_query}")
+println(s"Number of records retrieved via query: ${count_query}")
 println(s"Finished validation via query: ${formatter.format(Instant.now)}")

 assert(count_source == count_query)
@@ -260,7 +260,7 @@ val changeFeedCfg = Map(
 )
 val changeFeed_df = spark.read.format("cosmos.oltp.changeFeed").options(changeFeedCfg).load()
 val count_changeFeed = changeFeed_df.count()
-println(s"Number of records retrieved via change feed: ${count_changeFeed}")
+println(s"Number of records retrieved via change feed: ${count_changeFeed}")
 println(s"Finished validation via change feed: ${formatter.format(Instant.now)}")

 assert(count_source == count_changeFeed)
@@ -289,7 +289,7 @@ val readCfg = Map(
 )

 val toBeDeleted_df = spark.read.format("cosmos.oltp").options(readCfg).load().limit(100000)
-println(s"Number of records to be deleted: ${toBeDeleted_df.count}")
+println(s"Number of records to be deleted: ${toBeDeleted_df.count}")

 println(s"Starting to bulk delete documents: ${formatter.format(Instant.now)}")
 val deleteCfg = writeCfg + ("spark.cosmos.write.strategy" -> "ItemDelete")
@@ -306,7 +306,7 @@ val countCfg = readCfg + ("spark.cosmos.read.customQuery" -> "SELECT COUNT(0) AS
 val count_query_schema=StructType(Array(StructField("Count", LongType, true)))
 val query_df = spark.read.format("cosmos.oltp").schema(count_query_schema).options(countCfg).load()
 val count_query = query_df.agg(sum("Count").as("TotalCount")).first.getLong(0)
-println(s"Number of records retrieved via query: ${count_query}")
+println(s"Number of records retrieved via query: ${count_query}")
 println(s"Finished count validation via query: ${formatter.format(Instant.now)}")

 assert (math.max(0, count_source - 100000) == count_query)
@@ -349,7 +349,7 @@ assert(df_Tables.count() == 3)
 // COMMAND ----------

 // MAGIC %sql
-// MAGIC CREATE TABLE cosmosCatalog.SampleDatabase.GreenTaxiRecordsView
+// MAGIC CREATE TABLE cosmosCatalog.SampleDatabase.GreenTaxiRecordsView
 // MAGIC (id STRING, _ts TIMESTAMP, vendorID INT, totalAmount DOUBLE)
 // MAGIC USING cosmos.oltp
 // MAGIC TBLPROPERTIES(isCosmosView = 'True')
@@ -359,7 +359,7 @@ assert(df_Tables.count() == 3)
 // MAGIC spark.cosmos.read.inferSchema.enabled = 'False',
 // MAGIC spark.cosmos.read.inferSchema.includeSystemProperties = 'True',
 // MAGIC spark.cosmos.read.partitioning.strategy = 'Aggressive');
-// MAGIC
+// MAGIC
 // MAGIC SELECT * FROM cosmosCatalog.SampleDatabase.GreenTaxiRecordsView LIMIT 10

 // COMMAND ----------
@@ -370,7 +370,7 @@ assert(df_Tables.count() == 3)
 // COMMAND ----------

 // MAGIC %sql
-// MAGIC CREATE TABLE cosmosCatalog.SampleDatabase.GreenTaxiRecordsAnotherView
+// MAGIC CREATE TABLE cosmosCatalog.SampleDatabase.GreenTaxiRecordsAnotherView
 // MAGIC USING cosmos.oltp
 // MAGIC TBLPROPERTIES(isCosmosView = 'True')
 // MAGIC OPTIONS (
@@ -379,7 +379,7 @@ assert(df_Tables.count() == 3)
 // MAGIC spark.cosmos.read.inferSchema.enabled = 'True',
 // MAGIC spark.cosmos.read.inferSchema.includeSystemProperties = 'False',
 // MAGIC spark.cosmos.read.partitioning.strategy = 'Restrictive');
-// MAGIC
+// MAGIC
 // MAGIC SELECT * FROM cosmosCatalog.SampleDatabase.GreenTaxiRecordsAnotherView LIMIT 10

 // COMMAND ----------
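The sample diff above shows the corrected `authType` and the secret lookups, but not the point where those values are passed to the connector. As a hedged sketch only (not part of the sample file), the snippet below shows one way the notebook's values could be assembled into a write configuration, including the roughly 95% throughput-control threshold mentioned in the notebook text; `clientId` and all option names beyond those visible in the diff are assumptions to verify against the connector documentation.

```scala
// Hypothetical continuation of the notebook (not in the diff): combine the values read
// earlier (authType, cosmosEndpoint, subscriptionId, tenantId, clientId, clientSecret)
// into options for the cosmos.oltp data source.
val writeCfgSketch = Map(
  "spark.cosmos.accountEndpoint" -> cosmosEndpoint,
  "spark.cosmos.auth.type" -> authType,                      // "ServicePrincipal" after this commit
  "spark.cosmos.account.subscriptionId" -> subscriptionId,
  "spark.cosmos.account.tenantId" -> tenantId,
  "spark.cosmos.auth.aad.clientId" -> clientId,
  "spark.cosmos.auth.aad.clientSecret" -> clientSecret,
  "spark.cosmos.database" -> "SampleDatabase",
  "spark.cosmos.container" -> "GreenTaxiRecords",
  "spark.cosmos.write.strategy" -> "ItemOverwrite",
  // Keep ingestion at roughly 95% of provisioned throughput, as described in the notebook text.
  "spark.cosmos.throughputControl.enabled" -> "true",
  "spark.cosmos.throughputControl.name" -> "GreenTaxiIngestion",
  "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95",
  "spark.cosmos.throughputControl.globalControl.database" -> "SampleDatabase",
  "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
)

// greenTaxiDF stands in for the DataFrame prepared earlier in the notebook.
// greenTaxiDF.write.format("cosmos.oltp").mode("Append").options(writeCfgSketch).save()
```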
