Commit 52cbef1

Merge pull request #31 from data-integrations/feature/CDAP-16222-FixImports
CDAP-16222: Fix imports in dynamic spark plugin documentation/examples
2 parents 9180e11 + 57daa45 commit 52cbef1

File tree: 4 files changed, +9 −9 lines

docs/ScalaSparkCompute-sparkcompute.md (3 additions & 3 deletions)

@@ -47,9 +47,9 @@ and produces records of two fields, ``'word'`` and ``'count'``.

The following imports are included automatically and are ready for the user code to use:

-import co.cask.cdap.api.data.format._
-import co.cask.cdap.api.data.schema._;
-import co.cask.cdap.etl.api.batch._
+import io.cdap.cdap.api.data.format._
+import io.cdap.cdap.api.data.schema._;
+import io.cdap.cdap.etl.api.batch._
import org.apache.spark._
import org.apache.spark.api.java._
import org.apache.spark.rdd._
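Every hunk in this commit is the same mechanical change: the legacy ``co.cask.cdap`` package prefix becomes ``io.cdap.cdap`` (the prefix CDAP adopted with its 6.x releases). A hypothetical sketch of applying that rename programmatically:

```python
import re

def fix_cdap_imports(text: str) -> str:
    """Rewrite the legacy CDAP package prefix to the io.cdap.cdap prefix.

    Sketch only: the actual commit edited the files by hand; this just
    illustrates the one-for-one substitution the diff performs.
    """
    return re.sub(r"\bco\.cask\.cdap\b", "io.cdap.cdap", text)

print(fix_cdap_imports("import co.cask.cdap.api.spark._"))
# -> import io.cdap.cdap.api.spark._
```

Unrelated packages such as ``org.apache.spark`` are left untouched, which matches the context lines of each hunk.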

docs/ScalaSparkProgram-sparkprogram.md (2 additions & 2 deletions)

@@ -13,14 +13,14 @@ Properties

**mainClass** The fully qualified class name for the Spark application.
It must either be an ``object`` that has a ``main`` method define inside, with the method signature as
``def main(args: Array[String]): Unit``; or it is a class that extends from the CDAP
-``co.cask.cdap.api.spark.SparkMain`` trait that implements the ``run`` method, with the method signature as
+``io.cdap.cdap.api.spark.SparkMain`` trait that implements the ``run`` method, with the method signature as
``def run(implicit sec: SparkExecutionContext): Unit``

**scalaCode** The self-contained Spark application written in Scala.
For example, an application that reads from CDAP stream with name ``streamName``,
performs a simple word count logic and logs the result can be written as:

-import co.cask.cdap.api.spark._
+import io.cdap.cdap.api.spark._
import org.apache.spark._
import org.slf4j._
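The first ``mainClass`` form described in the hunk above — an ``object`` with the required ``main`` signature — could look like this minimal sketch (names and word-count logic are hypothetical, not part of the commit; it assumes Spark on the classpath):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical mainClass target: an object exposing
// def main(args: Array[String]): Unit
object WordCountProgram {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCountProgram"))
    val counts = sc.parallelize(Seq("hello", "world", "hello"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .collect()
    counts.foreach { case (word, n) => println(s"$word $n") }
    sc.stop()
  }
}
```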
docs/ScalaSparkSink-sparksink.md (3 additions & 3 deletions)

@@ -45,9 +45,9 @@ This will perform a word count on the input field ``'body'``, then write out the

The following imports are included automatically and are ready for the user code to use:

-import co.cask.cdap.api.data.format._
-import co.cask.cdap.api.data.schema._;
-import co.cask.cdap.etl.api.batch._
+import io.cdap.cdap.api.data.format._
+import io.cdap.cdap.api.data.schema._;
+import io.cdap.cdap.etl.api.batch._
import org.apache.spark._
import org.apache.spark.api.java._
import org.apache.spark.rdd._

widgets/ScalaSparkProgram-sparkprogram.json (1 addition & 1 deletion)

@@ -19,7 +19,7 @@

"label": "Scala",
"name": "scalaCode",
"widget-attributes": {
-"default": "import co.cask.cdap.api.spark._\nimport org.apache.spark._\nimport org.slf4j._\n\nclass SparkProgram extends SparkMain {\n  import SparkProgram._\n\n  override def run(implicit sec: SparkExecutionContext): Unit = {\n    LOG.info(\"Spark Program Started\")\n\n    val sc = new SparkContext\n\n    LOG.info(\"Spark Program Completed\")\n  }\n}\n\nobject SparkProgram {\n  val LOG = LoggerFactory.getLogger(getClass())\n}"
+"default": "import io.cdap.cdap.api.spark._\nimport org.apache.spark._\nimport org.slf4j._\n\nclass SparkProgram extends SparkMain {\n  import SparkProgram._\n\n  override def run(implicit sec: SparkExecutionContext): Unit = {\n    LOG.info(\"Spark Program Started\")\n\n    val sc = new SparkContext\n\n    LOG.info(\"Spark Program Completed\")\n  }\n}\n\nobject SparkProgram {\n  val LOG = LoggerFactory.getLogger(getClass())\n}"
},
{
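For readability, the new ``default`` value in widgets/ScalaSparkProgram-sparkprogram.json, with its JSON escaping removed, is the following Scala program (it compiles only with the CDAP Spark API and Spark on the classpath):

```scala
import io.cdap.cdap.api.spark._
import org.apache.spark._
import org.slf4j._

class SparkProgram extends SparkMain {
  import SparkProgram._

  override def run(implicit sec: SparkExecutionContext): Unit = {
    LOG.info("Spark Program Started")

    val sc = new SparkContext

    LOG.info("Spark Program Completed")
  }
}

object SparkProgram {
  val LOG = LoggerFactory.getLogger(getClass())
}
```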
