
Commit b42a5b9

Update CosmosDBLiveSingleContainerMigration.scala (Azure#34145)
* Update CosmosDBLiveSingleContainerMigration.scala
* Update README.md: bump Spark Connector version
* Update CosmosDBLiveSingleContainerMigration.scala
1 parent 75db6bd commit b42a5b9


2 files changed (+2 / -2 lines changed)


sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration/CosmosDBLiveSingleContainerMigration.scala

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmos
 // MAGIC /* NOTE: It is important to enable TTL (can be off/-1 by default) on the throughput control container */
 // MAGIC CREATE TABLE IF NOT EXISTS cosmosCatalog.`database-v4`.ThroughputControl -- replace database-v4 with source database name - ThroughputControl table will be created there
 // MAGIC USING cosmos.oltp
-// MAGIC OPTIONS(spark.cosmos.database = 'database-v4') -- replace database-v4 with the name of your source database
+// MAGIC OPTIONS(spark.cosmos.database = 'database-v4') -- replace database-v4 with the name of your source database. Do NOT change value partitionKeyPath = '/groupId' below - it must be named '/groupId' for Throughput control feature to work
 // MAGIC TBLPROPERTIES(partitionKeyPath = '/groupId', autoScaleMaxThroughput = '4000', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1');
 // MAGIC
 // MAGIC CREATE TABLE IF NOT EXISTS cosmosCatalog.`database-v4`.customer_v2 -- replace database-v4 with the name of your source database, and customer_v2 with what you want to name your target container - it will be created here
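
The partition key constraint matters because the connector's global throughput control keeps its state documents in this container, keyed by group id. Below is a minimal sketch (not part of this commit) of how a copy job might reference the ThroughputControl container created above; the throughput-control keys follow the connector's documented settings, while `sourceDF`, the group name `migrationGroup`, and the reuse of `cosmosEndpoint` / `cosmosMasterKey` from earlier in the notebook are assumptions:

```scala
// Sketch only: wiring a write job to the ThroughputControl container created above.
// cosmosEndpoint / cosmosMasterKey are assumed to be defined earlier in the notebook,
// as in the sample; sourceDF is a hypothetical DataFrame holding the data to migrate.
val writeCfg = Map(
  "spark.cosmos.accountEndpoint" -> cosmosEndpoint,
  "spark.cosmos.accountKey" -> cosmosMasterKey,
  "spark.cosmos.database" -> "database-v4",   // source database, per the sample
  "spark.cosmos.container" -> "customer_v2",  // target container created by the sample
  "spark.cosmos.write.strategy" -> "ItemOverwrite",
  // Global throughput control stores its state documents in the ThroughputControl
  // container, which is why its partition key path must stay '/groupId' with TTL enabled.
  "spark.cosmos.throughputControl.enabled" -> "true",
  "spark.cosmos.throughputControl.name" -> "migrationGroup",            // hypothetical group name
  "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95", // use ~95% of provisioned RUs
  "spark.cosmos.throughputControl.globalControl.database" -> "database-v4",
  "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
)

sourceDF.write.format("cosmos.oltp").options(writeCfg).mode("Append").save()
```

Here `targetThroughputThreshold` caps the job at a fraction of the container's provisioned RU/s, so the live workload being migrated keeps some headroom.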

sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration/README.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ Select the latest Azure Databricks runtime version which supports Spark 3.0 or h
 
 ## Install Azure Cosmos DB Spark Connector jar
 
-* Install the Azure Cosmos DB Spark Connector jar on the cluster by providing maven co-ordinates `com.azure.cosmos.spark:azure-cosmos-spark_3-2_2-12:4.7.0`:
+* Install the Azure Cosmos DB Spark Connector jar on the cluster by providing maven co-ordinates `com.azure.cosmos.spark:azure-cosmos-spark_3-2_2-12:4.17.2`:
 
 ![image](./media/jar.jpg)
 
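With the jar installed at the bumped version, a quick sanity check (a sketch, not from this README) is to confirm from a notebook cell that the connector's catalog class resolves:

```scala
// Throws ClassNotFoundException if the connector jar is not attached to the cluster.
Class.forName("com.azure.cosmos.spark.CosmosCatalog")

// Then register the Cosmos catalog the same way the migration notebook does.
spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
```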