Commit 977ccd7
deploy: f0e4508
twoentartian committed Sep 9, 2024
1 parent fac8ca2 commit 977ccd7
Showing 2 changed files with 4 additions and 4 deletions.
getting-started/apache-spark/dataframe-and-dataset.html (4 changes: 2 additions & 2 deletions)
@@ -152,7 +152,7 @@ <h3 id="dataframe-and-dataset">
is where DataFrames originate from. Spark has an optimized SQL query engine that
can optimize the compute path as well as provide a more efficient representation
of the rows when given a schema. From the
<a href="https://spark.apache.org/docs/3.1.2/sql-programming-guide.html#overview">Spark SQL, DataFrames and Datasets
<a href="https://spark.apache.org/docs/latest/sql-programming-guide.html#overview">Spark SQL, DataFrames and Datasets
Guide</a>:</p>
<blockquote>
<p>Spark SQL is a Spark module for structured data processing. Unlike the basic
@@ -219,7 +219,7 @@ <h3 id="dataframe-and-dataset">
StructField(numD,DoubleType,false), StructField(numE,LongType,false),
StructField(numF,DoubleType,false))
</code></pre>
-<p>An overview of the different <a href="https://spark.apache.org/docs/3.1.2/api/scala/org/apache/spark/sql/types/index.html">Spark SQL types</a>
+<p>An overview of the different <a href="https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/types/index.html">Spark SQL types</a>
can be found online. For the timestamp field we need to specify the format
according to the <a href="https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html">Java date format</a>,
in our case <code>MM/dd/yy:hh:mm</code>. Tying this all together we can build a DataFrame
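For context, the section this diff touches builds a DataFrame by pairing an explicit schema with that timestamp format. A minimal Scala sketch of the pattern, not part of this commit: the `time` column name and the input path are assumptions, while the numD/numE/numF fields and the MM/dd/yy:hh:mm format come from the diff context above.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object DataFrameSchemaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dataframe-schema-sketch")
      .master("local[*]")
      .getOrCreate()

    // Explicit schema built from Spark SQL types, matching the
    // StructFields printed in the diff context.
    val schema = StructType(Seq(
      StructField("time", TimestampType, nullable = false), // assumed column name
      StructField("numD", DoubleType, nullable = false),
      StructField("numE", LongType, nullable = false),
      StructField("numF", DoubleType, nullable = false)
    ))

    // Parse the timestamp column with the format named in the text.
    val df = spark.read
      .option("timestampFormat", "MM/dd/yy:hh:mm")
      .schema(schema)
      .csv("data/input.csv") // hypothetical input path

    df.printSchema()
    spark.stop()
  }
}
```

Supplying the schema up front skips Spark's inference pass over the CSV and gives the SQL engine the typed row representation the page describes.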
print.html (4 changes: 2 additions & 2 deletions)
@@ -727,7 +727,7 @@ <h3 id="setting-up-spark-in-docker">
is where DataFrames originate from. Spark has an optimized SQL query engine that
can optimize the compute path as well as provide a more efficient representation
of the rows when given a schema. From the
<a href="https://spark.apache.org/docs/3.1.2/sql-programming-guide.html#overview">Spark SQL, DataFrames and Datasets
<a href="https://spark.apache.org/docs/latest/sql-programming-guide.html#overview">Spark SQL, DataFrames and Datasets
Guide</a>:</p>
<blockquote>
<p>Spark SQL is a Spark module for structured data processing. Unlike the basic
@@ -794,7 +794,7 @@ <h3 id="setting-up-spark-in-docker">
StructField(numD,DoubleType,false), StructField(numE,LongType,false),
StructField(numF,DoubleType,false))
</code></pre>
-<p>An overview of the different <a href="https://spark.apache.org/docs/3.1.2/api/scala/org/apache/spark/sql/types/index.html">Spark SQL types</a>
+<p>An overview of the different <a href="https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/types/index.html">Spark SQL types</a>
can be found online. For the timestamp field we need to specify the format
according to the <a href="https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html">Java date format</a>,
in our case <code>MM/dd/yy:hh:mm</code>. Tying this all together we can build a DataFrame
