# [SPARK-18615][DOCS] Switch to multi-line doc to avoid a genjavadoc bug for backticks

## What changes were proposed in this pull request?

Currently, a single-line doc comment does not have its backticks marked down to `<code>..</code>`; they are printed as they are (`` `..` ``) in the generated javadoc. For example, take the line below:

```scala
/** Return an RDD with the pairs from `this` whose keys are not in `other`. */
```
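With the single-line form, the backticks appear to pass through into the generated Java source untouched, so javadoc renders them literally. A minimal sketch of the presumed generated stub follows — the class and method names here are hypothetical, for illustration only, not genjavadoc's verified emission:

```java
/**
 * Hypothetical stub sketching what genjavadoc presumably emits for the
 * single-line form: the backticks survive verbatim instead of becoming
 * <code>..</code> tags, so javadoc shows `this` and `other` literally.
 */
public class SingleLineDocStub {
  /** Return an RDD with the pairs from `this` whose keys are not in `other`. */
  public void subtractByKey() { }
}
```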

So, we can work around this by switching to the multi-line form, as below:

```scala
/**
 * Return an RDD with the pairs from `this` whose keys are not in `other`.
 */
```
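For comparison, a sketch of the presumed stub for the multi-line form (again a hypothetical class, for illustration only), where the backticks are converted so javadoc renders the identifiers as code:

```java
/**
 * Hypothetical stub sketching the multi-line form: here genjavadoc
 * presumably does convert the backticks, so javadoc renders the
 * identifiers as code.
 */
public class MultiLineDocStub {
  /**
   * Return an RDD with the pairs from <code>this</code> whose keys are not in <code>other</code>.
   */
  public void subtractByKey() { }
}
```

The screenshots below show the corresponding rendered pages.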

- javadoc

  - **Before**
    ![2016-11-29 10 39 14](https://cloud.githubusercontent.com/assets/6477701/20693606/e64c8f90-b622-11e6-8dfc-4a029216e23d.png)

  - **After**
    ![2016-11-29 10 39 08](https://cloud.githubusercontent.com/assets/6477701/20693607/e7280d36-b622-11e6-8502-d2e21cd5556b.png)

- scaladoc (this one looks fine either way)

  - **Before**
    ![2016-11-29 10 38 22](https://cloud.githubusercontent.com/assets/6477701/20693640/12c18aa8-b623-11e6-901a-693e2f6f8066.png)

  - **After**
    ![2016-11-29 10 40 05](https://cloud.githubusercontent.com/assets/6477701/20693642/14eb043a-b623-11e6-82ac-7cd0000106d1.png)

I suspect this is related to SPARK-16153 and the genjavadoc issue reported in lightbend/genjavadoc#85.

## How was this patch tested?

I found the affected comments via

```
grep -r "\/\*\*.*\`" . | grep .scala
```

and then checked whether each one appears in the public API documentation, using docs built manually (`jekyll build`) with Java 7.

Author: hyukjinkwon <[email protected]>

Closes apache#16050 from HyukjinKwon/javadoc-markdown.
HyukjinKwon authored and Robert Kruszewski committed Dec 2, 2016
1 parent df854c1 commit 470c3c8
Showing 24 changed files with 129 additions and 43 deletions.
4 changes: 3 additions & 1 deletion core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -378,7 +378,9 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Seria
     settings.entrySet().asScala.map(x => (x.getKey, x.getValue)).toArray
   }
 
-  /** Get all parameters that start with `prefix` */
+  /**
+   * Get all parameters that start with `prefix`
+   */
   def getAllWithPrefix(prefix: String): Array[(String, String)] = {
     getAll.filter { case (k, v) => k.startsWith(prefix) }
       .map { case (k, v) => (k.substring(prefix.length), v) }
core/src/main/scala/org/apache/spark/api/java/JavaDoubleRDD.scala
@@ -45,7 +45,9 @@ class JavaDoubleRDD(val srdd: RDD[scala.Double])
 
   import JavaDoubleRDD.fromRDD
 
-  /** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
+  /**
+   * Persist this RDD with the default storage level (`MEMORY_ONLY`).
+   */
   def cache(): JavaDoubleRDD = fromRDD(srdd.cache())
 
   /**
12 changes: 9 additions & 3 deletions core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
@@ -54,7 +54,9 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
 
   // Common RDD functions
 
-  /** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
+  /**
+   * Persist this RDD with the default storage level (`MEMORY_ONLY`).
+   */
   def cache(): JavaPairRDD[K, V] = new JavaPairRDD[K, V](rdd.cache())
 
   /**
@@ -454,13 +456,17 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
     fromRDD(rdd.subtractByKey(other))
   }
 
-  /** Return an RDD with the pairs from `this` whose keys are not in `other`. */
+  /**
+   * Return an RDD with the pairs from `this` whose keys are not in `other`.
+   */
   def subtractByKey[W](other: JavaPairRDD[K, W], numPartitions: Int): JavaPairRDD[K, V] = {
     implicit val ctag: ClassTag[W] = fakeClassTag
     fromRDD(rdd.subtractByKey(other, numPartitions))
   }
 
-  /** Return an RDD with the pairs from `this` whose keys are not in `other`. */
+  /**
+   * Return an RDD with the pairs from `this` whose keys are not in `other`.
+   */
   def subtractByKey[W](other: JavaPairRDD[K, W], p: Partitioner): JavaPairRDD[K, V] = {
     implicit val ctag: ClassTag[W] = fakeClassTag
     fromRDD(rdd.subtractByKey(other, p))
4 changes: 3 additions & 1 deletion core/src/main/scala/org/apache/spark/api/java/JavaRDD.scala
@@ -34,7 +34,9 @@ class JavaRDD[T](val rdd: RDD[T])(implicit val classTag: ClassTag[T])
 
   // Common RDD functions
 
-  /** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
+  /**
+   * Persist this RDD with the default storage level (`MEMORY_ONLY`).
+   */
   def cache(): JavaRDD[T] = wrapRDD(rdd.cache())
 
   /**
core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
@@ -914,14 +914,18 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
     subtractByKey(other, self.partitioner.getOrElse(new HashPartitioner(self.partitions.length)))
   }
 
-  /** Return an RDD with the pairs from `this` whose keys are not in `other`. */
+  /**
+   * Return an RDD with the pairs from `this` whose keys are not in `other`.
+   */
   def subtractByKey[W: ClassTag](
       other: RDD[(K, W)],
       numPartitions: Int): RDD[(K, V)] = self.withScope {
     subtractByKey(other, new HashPartitioner(numPartitions))
   }
 
-  /** Return an RDD with the pairs from `this` whose keys are not in `other`. */
+  /**
+   * Return an RDD with the pairs from `this` whose keys are not in `other`.
+   */
   def subtractByKey[W: ClassTag](other: RDD[(K, W)], p: Partitioner): RDD[(K, V)] = self.withScope {
     new SubtractedRDD[K, V, W](self, other, p)
   }
8 changes: 6 additions & 2 deletions core/src/main/scala/org/apache/spark/rdd/RDD.scala
@@ -195,10 +195,14 @@ abstract class RDD[T: ClassTag](
     }
   }
 
-  /** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
+  /**
+   * Persist this RDD with the default storage level (`MEMORY_ONLY`).
+   */
   def persist(): this.type = persist(StorageLevel.MEMORY_ONLY)
 
-  /** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
+  /**
+   * Persist this RDD with the default storage level (`MEMORY_ONLY`).
+   */
   def cache(): this.type = persist()
 
   /**
graphx/src/main/scala/org/apache/spark/graphx/impl/EdgeRDDImpl.scala
@@ -63,7 +63,9 @@ class EdgeRDDImpl[ED: ClassTag, VD: ClassTag] private[graphx] (
     this
   }
 
-  /** Persists the edge partitions using `targetStorageLevel`, which defaults to MEMORY_ONLY. */
+  /**
+   * Persists the edge partitions using `targetStorageLevel`, which defaults to MEMORY_ONLY.
+   */
   override def cache(): this.type = {
     partitionsRDD.persist(targetStorageLevel)
     this
graphx/src/main/scala/org/apache/spark/graphx/impl/GraphImpl.scala
@@ -277,7 +277,9 @@ class GraphImpl[VD: ClassTag, ED: ClassTag] protected (
 
 object GraphImpl {
 
-  /** Create a graph from edges, setting referenced vertices to `defaultVertexAttr`. */
+  /**
+   * Create a graph from edges, setting referenced vertices to `defaultVertexAttr`.
+   */
   def apply[VD: ClassTag, ED: ClassTag](
       edges: RDD[Edge[ED]],
       defaultVertexAttr: VD,
@@ -286,7 +288,9 @@
     fromEdgeRDD(EdgeRDD.fromEdges(edges), defaultVertexAttr, edgeStorageLevel, vertexStorageLevel)
   }
 
-  /** Create a graph from EdgePartitions, setting referenced vertices to `defaultVertexAttr`. */
+  /**
+   * Create a graph from EdgePartitions, setting referenced vertices to `defaultVertexAttr`.
+   */
   def fromEdgePartitions[VD: ClassTag, ED: ClassTag](
       edgePartitions: RDD[(PartitionID, EdgePartition[ED, VD])],
      defaultVertexAttr: VD,
@@ -296,7 +300,9 @@
       vertexStorageLevel)
   }
 
-  /** Create a graph from vertices and edges, setting missing vertices to `defaultVertexAttr`. */
+  /**
+   * Create a graph from vertices and edges, setting missing vertices to `defaultVertexAttr`.
+   */
   def apply[VD: ClassTag, ED: ClassTag](
       vertices: RDD[(VertexId, VD)],
       edges: RDD[Edge[ED]],
graphx/src/main/scala/org/apache/spark/graphx/impl/VertexRDDImpl.scala
@@ -63,7 +63,9 @@ class VertexRDDImpl[VD] private[graphx] (
     this
   }
 
-  /** Persists the vertex partitions at `targetStorageLevel`, which defaults to MEMORY_ONLY. */
+  /**
+   * Persists the vertex partitions at `targetStorageLevel`, which defaults to MEMORY_ONLY.
+   */
   override def cache(): this.type = {
     partitionsRDD.persist(targetStorageLevel)
     this
mllib-local/src/main/scala/org/apache/spark/ml/linalg/Matrices.scala
@@ -85,25 +85,33 @@ sealed trait Matrix extends Serializable {
   @Since("2.0.0")
   def copy: Matrix
 
-  /** Transpose the Matrix. Returns a new `Matrix` instance sharing the same underlying data. */
+  /**
+   * Transpose the Matrix. Returns a new `Matrix` instance sharing the same underlying data.
+   */
   @Since("2.0.0")
   def transpose: Matrix
 
-  /** Convenience method for `Matrix`-`DenseMatrix` multiplication. */
+  /**
+   * Convenience method for `Matrix`-`DenseMatrix` multiplication.
+   */
   @Since("2.0.0")
   def multiply(y: DenseMatrix): DenseMatrix = {
     val C: DenseMatrix = DenseMatrix.zeros(numRows, y.numCols)
     BLAS.gemm(1.0, this, y, 0.0, C)
     C
   }
 
-  /** Convenience method for `Matrix`-`DenseVector` multiplication. For binary compatibility. */
+  /**
+   * Convenience method for `Matrix`-`DenseVector` multiplication. For binary compatibility.
+   */
   @Since("2.0.0")
   def multiply(y: DenseVector): DenseVector = {
     multiply(y.asInstanceOf[Vector])
   }
 
-  /** Convenience method for `Matrix`-`Vector` multiplication. */
+  /**
+   * Convenience method for `Matrix`-`Vector` multiplication.
+   */
   @Since("2.0.0")
   def multiply(y: Vector): DenseVector = {
     val output = new DenseVector(new Array[Double](numRows))
4 changes: 3 additions & 1 deletion mllib/src/main/scala/org/apache/spark/ml/Pipeline.scala
@@ -216,7 +216,9 @@ object Pipeline extends MLReadable[Pipeline] {
     }
   }
 
-  /** Methods for `MLReader` and `MLWriter` shared between [[Pipeline]] and [[PipelineModel]] */
+  /**
+   * Methods for `MLReader` and `MLWriter` shared between [[Pipeline]] and [[PipelineModel]]
+   */
   private[ml] object SharedReadWrite {
 
     import org.json4s.JsonDSL._
mllib/src/main/scala/org/apache/spark/ml/attribute/AttributeGroup.scala
@@ -239,7 +239,9 @@ object AttributeGroup {
     }
   }
 
-  /** Creates an attribute group from a `StructField` instance. */
+  /**
+   * Creates an attribute group from a `StructField` instance.
+   */
   def fromStructField(field: StructField): AttributeGroup = {
     require(field.dataType == new VectorUDT)
     if (field.metadata.contains(ML_ATTR)) {
mllib/src/main/scala/org/apache/spark/ml/attribute/attributes.scala
@@ -109,7 +109,9 @@ sealed abstract class Attribute extends Serializable {
     StructField(name.get, DoubleType, nullable = false, newMetadata)
   }
 
-  /** Converts to a `StructField`. */
+  /**
+   * Converts to a `StructField`.
+   */
   def toStructField(): StructField = toStructField(Metadata.empty)
 
   override def toString: String = toMetadataImpl(withType = true).toString
@@ -369,12 +371,16 @@ class NominalAttribute private[ml] (
   override def withIndex(index: Int): NominalAttribute = copy(index = Some(index))
   override def withoutIndex: NominalAttribute = copy(index = None)
 
-  /** Copy with new values and empty `numValues`. */
+  /**
+   * Copy with new values and empty `numValues`.
+   */
   def withValues(values: Array[String]): NominalAttribute = {
     copy(numValues = None, values = Some(values))
   }
 
-  /** Copy with new values and empty `numValues`. */
+  /**
+   * Copy with new values and empty `numValues`.
+   */
   @varargs
   def withValues(first: String, others: String*): NominalAttribute = {
     copy(numValues = None, values = Some((first +: others).toArray))
@@ -385,12 +391,16 @@
     copy(values = None)
   }
 
-  /** Copy with a new `numValues` and empty `values`. */
+  /**
+   * Copy with a new `numValues` and empty `values`.
+   */
  def withNumValues(numValues: Int): NominalAttribute = {
     copy(numValues = Some(numValues), values = None)
   }
 
-  /** Copy without the `numValues`. */
+  /**
+   * Copy without the `numValues`.
+   */
   def withoutNumValues: NominalAttribute = copy(numValues = None)
 
   /**
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
@@ -1105,7 +1105,9 @@ sealed trait LogisticRegressionTrainingSummary extends LogisticRegressionSummary
  */
 sealed trait LogisticRegressionSummary extends Serializable {
 
-  /** Dataframe output by the model's `transform` method. */
+  /**
+   * Dataframe output by the model's `transform` method.
+   */
   def predictions: DataFrame
 
   /** Field in "predictions" which gives the probability of each class as a vector. */
mllib/src/main/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.scala
@@ -886,7 +886,9 @@ class GeneralizedLinearRegressionSummary private[regression] (
   protected val model: GeneralizedLinearRegressionModel =
     origModel.copy(ParamMap.empty).setPredictionCol(predictionCol)
 
-  /** Predictions output by the model's `transform` method. */
+  /**
+   * Predictions output by the model's `transform` method.
+   */
   @Since("2.0.0") @transient val predictions: DataFrame = model.transform(dataset)
 
   private[regression] lazy val family: Family = Family.fromName(model.getFamily)
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala
@@ -255,10 +255,14 @@ class ChiSqSelector @Since("2.1.0") () extends Serializable {
 
 private[spark] object ChiSqSelector {
 
-  /** String name for `numTopFeatures` selector type. */
+  /**
+   * String name for `numTopFeatures` selector type.
+   */
   val NumTopFeatures: String = "numTopFeatures"
 
-  /** String name for `percentile` selector type. */
+  /**
+   * String name for `percentile` selector type.
+   */
   val Percentile: String = "percentile"
 
   /** String name for `fpr` selector type. */
16 changes: 12 additions & 4 deletions mllib/src/main/scala/org/apache/spark/mllib/linalg/Matrices.scala
@@ -91,25 +91,33 @@ sealed trait Matrix extends Serializable {
   @Since("1.2.0")
   def copy: Matrix
 
-  /** Transpose the Matrix. Returns a new `Matrix` instance sharing the same underlying data. */
+  /**
+   * Transpose the Matrix. Returns a new `Matrix` instance sharing the same underlying data.
+   */
   @Since("1.3.0")
   def transpose: Matrix
 
-  /** Convenience method for `Matrix`-`DenseMatrix` multiplication. */
+  /**
+   * Convenience method for `Matrix`-`DenseMatrix` multiplication.
+   */
   @Since("1.2.0")
   def multiply(y: DenseMatrix): DenseMatrix = {
     val C: DenseMatrix = DenseMatrix.zeros(numRows, y.numCols)
     BLAS.gemm(1.0, this, y, 0.0, C)
     C
   }
 
-  /** Convenience method for `Matrix`-`DenseVector` multiplication. For binary compatibility. */
+  /**
+   * Convenience method for `Matrix`-`DenseVector` multiplication. For binary compatibility.
+   */
   @Since("1.2.0")
   def multiply(y: DenseVector): DenseVector = {
     multiply(y.asInstanceOf[Vector])
   }
 
-  /** Convenience method for `Matrix`-`Vector` multiplication. */
+  /**
+   * Convenience method for `Matrix`-`Vector` multiplication.
+   */
   @Since("1.4.0")
   def multiply(y: Vector): DenseVector = {
     val output = new DenseVector(new Array[Double](numRows))
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/BlockMatrix.scala
@@ -295,7 +295,9 @@ class BlockMatrix @Since("1.3.0") (
     new IndexedRowMatrix(rows)
   }
 
-  /** Collect the distributed matrix on the driver as a `DenseMatrix`. */
+  /**
+   * Collect the distributed matrix on the driver as a `DenseMatrix`.
+   */
   @Since("1.3.0")
   def toLocalMatrix(): Matrix = {
     require(numRows() < Int.MaxValue, "The number of rows of this matrix should be less than " +
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/CoordinateMatrix.scala
@@ -101,7 +101,9 @@ class CoordinateMatrix @Since("1.0.0") (
     toIndexedRowMatrix().toRowMatrix()
   }
 
-  /** Converts to BlockMatrix. Creates blocks of `SparseMatrix` with size 1024 x 1024. */
+  /**
+   * Converts to BlockMatrix. Creates blocks of `SparseMatrix` with size 1024 x 1024.
+   */
   @Since("1.3.0")
   def toBlockMatrix(): BlockMatrix = {
     toBlockMatrix(1024, 1024)
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/IndexedRowMatrix.scala
@@ -90,7 +90,9 @@ class IndexedRowMatrix @Since("1.0.0") (
     new RowMatrix(rows.map(_.vector), 0L, nCols)
   }
 
-  /** Converts to BlockMatrix. Creates blocks of `SparseMatrix` with size 1024 x 1024. */
+  /**
+   * Converts to BlockMatrix. Creates blocks of `SparseMatrix` with size 1024 x 1024.
+   */
   @Since("1.3.0")
   def toBlockMatrix(): BlockMatrix = {
     toBlockMatrix(1024, 1024)
mllib/src/main/scala/org/apache/spark/mllib/stat/Statistics.scala
@@ -176,7 +176,9 @@ object Statistics {
     ChiSqTest.chiSquaredFeatures(data)
   }
 
-  /** Java-friendly version of `chiSqTest()` */
+  /**
+   * Java-friendly version of `chiSqTest()`
+   */
   @Since("1.5.0")
   def chiSqTest(data: JavaRDD[LabeledPoint]): Array[ChiSqTestResult] = chiSqTest(data.rdd)
 
@@ -218,7 +220,9 @@ object Statistics {
     KolmogorovSmirnovTest.testOneSample(data, distName, params: _*)
   }
 
-  /** Java-friendly version of `kolmogorovSmirnovTest()` */
+  /**
+   * Java-friendly version of `kolmogorovSmirnovTest()`
+   */
   @Since("1.5.0")
   @varargs
   def kolmogorovSmirnovTest(