Package-level declarations
Types
Filters the values of the iterator lazily using predicate.
This wrapper over SparkSession provides several additional methods to create org.apache.spark.sql.Dataset.
This wrapper over SparkSession and JavaStreamingContext provides several additional methods to create org.apache.spark.sql.Dataset.
Maps the values of the iterator lazily using func.
Typed and named wrapper around SparkUserDefinedFunction with defined encoder.
Instance of a UDF with 0 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 1 argument with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 10 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 11 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 12 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 13 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 14 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 15 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 16 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 17 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 18 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 19 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 2 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 20 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 21 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 22 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 3 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 4 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 5 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 6 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 7 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 8 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 9 arguments with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with vararg arguments of the same type with name. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
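A minimal sketch of how a named UDF like the ones above might be created, invoked on typed columns, and registered for SQL. The package import, the data class Person, and the sample data are assumptions for illustration only, not part of this listing:

    import org.jetbrains.kotlinx.spark.api.*

    data class Person(val name: String)

    fun main() = withSpark {
        val people = dsOf(Person("Alice"), Person("Bob"))   // hypothetical data

        // A named UDF with 1 argument (NamedUserDefinedFunction1), also registered for SQL.
        val upper = udf.register("upper") { s: String -> s.uppercase() }

        // Invoke it with a typed column in select ...
        people.select(upper(col(Person::name))).show()

        // ... or call it from SQL, since register() already made it visible there.
        people.createOrReplaceTempView("people")
        spark.sql("SELECT upper(name) FROM people").show()
    }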
Partitions the values of the iterator lazily in groups of size.
Log levels for Spark.
The entry point to programming Spark with the Dataset and DataFrame API.
An exception thrown when the UDF is generated with illegal types for the parameters.
Kotlin wrapper around UDF interface to ensure nullability in types.
A wrapper for a UDF with 0 arguments.
A wrapper for a UDF with 1 argument.
A wrapper for a UDF with 10 arguments.
A wrapper for a UDF with 11 arguments.
A wrapper for a UDF with 12 arguments.
A wrapper for a UDF with 13 arguments.
A wrapper for a UDF with 14 arguments.
A wrapper for a UDF with 15 arguments.
A wrapper for a UDF with 16 arguments.
A wrapper for a UDF with 17 arguments.
A wrapper for a UDF with 18 arguments.
A wrapper for a UDF with 19 arguments.
A wrapper for a UDF with 2 arguments.
A wrapper for a UDF with 20 arguments.
A wrapper for a UDF with 21 arguments.
A wrapper for a UDF with 22 arguments.
A wrapper for a UDF with 3 arguments.
A wrapper for a UDF with 4 arguments.
A wrapper for a UDF with 5 arguments.
A wrapper for a UDF with 6 arguments.
A wrapper for a UDF with 7 arguments.
A wrapper for a UDF with 8 arguments.
A wrapper for a UDF with 9 arguments.
Typed wrapper around SparkUserDefinedFunction with defined encoder.
Instance of a UDF with 0 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 1 argument. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 10 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 11 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 12 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 13 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 14 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 15 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 16 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 17 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 18 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 19 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 2 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 20 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 21 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 22 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 3 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 4 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 5 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 6 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 7 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 8 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with 9 arguments. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Instance of a UDF with vararg arguments of the same type. This UDF can be invoked with (typed) columns in a Dataset.select or selectTyped call. Alternatively it can be registered for SQL calls using register.
Functions
Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this RDD, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's, as in scala.TraversableOnce. The former operation is used for merging values within a partition, and the latter is used for merging values between partitions. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.
Creates an Aggregator in functional manner.
Provides a type hint about the expected return value of this column. This information can be used by operations such as select on a Dataset to automatically convert the results into the correct JVM types.
(Kotlin-specific) Returns a new Dataset where each record has been mapped on to the specified type. The method used to map columns depends on the type of R.
Broadcast a read-only variable to the cluster, returning a org.apache.spark.broadcast.Broadcast object for reading it in distributed functions. The variable will be sent to each cluster only once.
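The broadcast entry above can be illustrated with plain Spark broadcast machinery; a minimal sketch in which the lookup map and the manual JavaSparkContext wrapping are assumptions used only for illustration:

    import org.apache.spark.api.java.JavaSparkContext
    import org.jetbrains.kotlinx.spark.api.*

    fun main() = withSpark {
        val lookup = mapOf(1 to "one", 2 to "two")

        // Ship the lookup table to each executor once instead of once per task.
        val broadcastLookup = JavaSparkContext(spark.sparkContext).broadcast(lookup)

        dsOf(1, 2, 2).map { broadcastLookup.value()[it] ?: "unknown" }.show()
    }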
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
(Kotlin-specific) Applies the given function to each cogrouped data. For each unique group, the function will be passed the grouping key and 2 iterators containing all elements in the group from Dataset and other. The function can return an iterator containing elements of an arbitrary type which will be returned as a new Dataset.
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to partition the generated RDDs.
Returns a TypedColumn based on the given column name and type DsType.
Returns a Column based on the given column name.
Returns a Column based on the given class attribute, not connected to a dataset.
Selects column based on the column name and returns it as a TypedColumn.
Helper function to quickly get a TypedColumn (or Column) from a dataset in a refactor-safe manner.
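A short sketch of the column helpers above. The data class Sale and the sample data are hypothetical, and the property-based col overloads are assumed to behave as described in the entries above:

    import org.jetbrains.kotlinx.spark.api.*

    data class Sale(val product: String, val amount: Double)

    fun main() = withSpark {
        val sales = dsOf(Sale("a", 1.0), Sale("b", 2.0))

        // Refactor-safe typed column from a class property, not tied to a dataset.
        val amount = col(Sale::amount)

        // Typed column selected from the dataset itself.
        val product = sales.col(Sale::product)

        sales.select(amount).show()
        sales.select(product).show()
    }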
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the existing partitioner/parallelism level. This method is here for backward compatibility. It does not provide combiner classtag information to the shuffle.
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD. This method is here for backward compatibility. It does not provide combiner classtag information to the shuffle.
Combine elements of each key in DStream's RDDs using custom functions. This is similar to the combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions in the Spark core documentation for more information.
Generic function to combine the elements for each key using a custom set of aggregation functions. This method is here for backward compatibility. It does not provide combiner classtag information to the shuffle.
Return approximate number of distinct values for each key in this RDD.
It is hard to call Dataset.debugCodegen from Kotlin, so here is a utility for that.
Utility method to create a DataFrame from *array or vararg arguments.
Utility method to create a DataFrame from *array or vararg arguments with given column names.
Returns a new mutable Seq with the given elements.
Return an RDD containing only the elements in the range range. If the RDD has been partitioned using a RangePartitioner, then this operation can be performed efficiently by only scanning the partitions that might contain matching elements. Otherwise, a standard filter is applied to all partitions.
Return an RDD containing only the elements in the inclusive range lower to upper. If the RDD has been partitioned using a RangePartitioner, then this operation can be performed efficiently by only scanning the partitions that might contain matching elements. Otherwise, a standard filter is applied to all partitions.
Return an RDD containing only the elements in the inclusive range range. If the RDD has been partitioned using a RangePartitioner, then this operation can be performed efficiently by only scanning the partitions that might contain matching elements. Otherwise, a standard filter is applied to all partitions.
(Kotlin-specific) Filters rows to eliminate null values.
(Kotlin-specific) Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all the elements in the group. The function can return an iterator containing elements of an arbitrary type which will be returned as a new Dataset.
(Kotlin-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state. The result Dataset will represent the objects returned by the function. For a static batch Dataset, the function will be invoked once per group. For a streaming Dataset, the function will be invoked for each group repeatedly in every trigger, and updates to each group's state will be saved across invocations. See GroupState for more details.
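A sketch of the grouped flat-map described above. The Event class, the sample data, and the above-average filter are hypothetical:

    import org.jetbrains.kotlinx.spark.api.*

    data class Event(val user: String, val score: Int)

    fun main() = withSpark {
        val events = dsOf(Event("a", 1), Event("a", 5), Event("b", 3))

        events
            .groupByKey { it.user }
            // For each user, keep only the events scoring above that user's average.
            .flatMapGroups { _, rows ->
                val group = rows.asSequence().toList()
                val avg = group.map { it.score }.average()
                group.filter { it.score > avg }.iterator()
            }
            .show()
    }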
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning.
Return a new DStream by applying a flatmap function to the value of each key-value pairs in 'this' DStream without changing the key.
Flattens iterator.
(Kotlin-specific) Returns a new Dataset by flattening. This means that a Dataset of an iterable such as listOf(listOf(1, 2, 3), listOf(4, 5, 6)) will be flattened to a Dataset of listOf(1, 2, 3, 4, 5, 6).
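For instance, a minimal sketch, assuming the flattening extension is invoked as flatten() and that an encoder for List<Int> is available:

    import org.jetbrains.kotlinx.spark.api.*

    fun main() = withSpark {
        val nested = dsOf(listOf(1, 2, 3), listOf(4, 5, 6))
        nested.flatten().show()   // a Dataset containing 1, 2, 3, 4, 5, 6
    }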
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., an empty list for list concatenation, 0 for addition, or 1 for multiplication).
Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Hash-partitions the resulting RDD using the existing partitioner/parallelism level.
Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.
Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Uses the given Partitioner to partition the output RDD.
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Converts Optional to Kotlin nullable.
(Kotlin-specific) Returns the group state value if it exists, else null. This is comparable to GroupState.getOption, but instead utilises Kotlin's nullability features to get the same result.
Returns the state value if it exists, else null.
Converts Scala Option to Kotlin nullable.
Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD with the existing partitioner/parallelism level. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD into numPartitions partitions. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
Group the values for each key in the RDD into a single sequence. Allows controlling the partitioning of the resulting key-value pair RDD by passing a Partitioner. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
(Kotlin-specific) Returns a KeyValueGroupedDataset where the data is grouped by the given key func.
Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying groupByKey on each RDD. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Create a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.
Alias for cogroup.
Compute a histogram of the data using bucketCount number of buckets evenly spaced between the minimum and maximum of the RDD. For example, if the min value is 0, the max is 100, and there are two buckets, the resulting buckets will be [0, 50) and [50, 100]. bucketCount must be at least 1. If the RDD contains infinity or NaN, an exception is thrown. If the elements in the RDD do not vary (max == min), a single bucket is always returned.
Compute a histogram using the provided buckets. The buckets are all open to the right except for the last, which is closed. E.g. for the array [1, 10, 20, 50] the buckets are [1, 10), [10, 20), [20, 50], i.e. 1<=x<10, 10<=x<20, 20<=x<=50. On the input of 1 and 50 we would have a histogram of 1, 0, 1.
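A small sketch of both histogram variants. The sample values are hypothetical, and JavaSparkContext is wrapped manually here purely for illustration:

    import org.apache.spark.api.java.JavaSparkContext
    import org.jetbrains.kotlinx.spark.api.*

    fun main() = withSpark {
        val jsc = JavaSparkContext(spark.sparkContext)
        val values = jsc.parallelizeDoubles(listOf(1.0, 4.0, 15.0, 30.0, 50.0))

        // Two evenly spaced buckets between min (1.0) and max (50.0):
        // a tuple of bucket boundaries and per-bucket counts.
        val evenBuckets = values.histogram(2)

        // Explicit buckets [1, 10), [10, 20), [20, 50]; for the data above the counts are 2, 1, 2.
        val counts = values.histogram(doubleArrayOf(1.0, 10.0, 20.0, 50.0))
        println(counts.joinToString())
    }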
True if the current column is in the given range.
Selects column based on the column name and returns it as a Column.
Selects column based on the column name and returns it as a TypedColumn.
Helper function to quickly get a TypedColumn (or Column) from a dataset in a refactor-safe manner.
Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Performs a hash join across the cluster.
Return an RDD containing all pairs of elements with matching keys in this and other. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in other. Uses the given Partitioner to partition the output RDD.
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output using the existing partitioner/parallelism level.
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Hash-partitions the output into numPartitions partitions.
Perform a left outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (v, Some(w))) for w in other, or the pair (k, (v, None)) if no elements in other have key k. Uses the given Partitioner to partition the output RDD.
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
(Kotlin-specific) Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all the elements in the group. The function can return an element of arbitrary type which will be returned as a new Dataset.
(Kotlin-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state. The result Dataset will represent the objects returned by the function. For a static batch Dataset, the function will be invoked once per group. For a streaming Dataset, the function will be invoked for each group repeatedly in every trigger, and updates to each group's state will be saved across invocations. See org.apache.spark.sql.streaming.GroupState for more details.
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning.
Returns a new KeyValueGroupedDataset where the given function func has been applied to the data. The grouping key is unchanged by this.
Return a new DStream by applying a map function to the value of each key-value pairs in 'this' DStream without changing the key.
Return a JavaMapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key. The mapping function and other specification (e.g. partitioners, timeouts, initial state data, etc.) of this transformation can be specified using the StateSpec class. The state data is accessible as a parameter of type State in the mapping function.
Returns the maximum element from this RDD as defined by the specified Comparator.
Returns the minimum element from this RDD as defined by the specified Comparator.
Returns a new mutable Seq with the given elements.
Compute the population variance of this RDD's elements.
Merge the values for each key using an associative and commutative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/parallelism level.
Merge the values for each key using an associative and commutative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with numPartitions partitions.
Merge the values for each key using an associative and commutative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.
Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value.
Merge the values for each key using an associative and commutative reduce function, but return the results immediately to the master as a Map. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
(Kotlin-specific) Reduces the elements of each group of data using the specified binary function. The given function must be commutative and associative or the result may be non-deterministic.
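A sketch of reducing each group with a commutative, associative function. The Purchase class and the sample data are hypothetical:

    import org.jetbrains.kotlinx.spark.api.*

    data class Purchase(val user: String, val amount: Double)

    fun main() = withSpark {
        val purchases = dsOf(Purchase("a", 10.0), Purchase("a", 5.0), Purchase("b", 1.0))

        purchases
            .groupByKey { it.user }
            // Sums the amounts per user; the order of application does not matter.
            .reduceGroups { left, right -> Purchase(left.user, left.amount + right.amount) }
            .show()
    }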
Registers a user-defined function (UDF) with name, for a UDF that's already defined using the Dataset API (i.e. of type NamedUserDefinedFunction).
Creates and registers a UDF (NamedUserDefinedFunction0) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction10) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction11) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction12) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction13) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction14) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction15) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction16) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction17) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction18) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction19) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction1) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a vararg UDF (NamedUserDefinedFunctionVararg) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction20) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction21) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction22) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction2) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction3) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction4) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction5) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction6) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction7) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction8) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction9) from a function reference adapting its name by reflection. For example: val myUdf = udf.register(::myFunction)
Registers agg as a UDAF for SQL. Returns the UDAF as NamedUserDefinedFunction. Obtains a NamedUserDefinedFunction1 that wraps the given agg so that it may be used with Data Frames.
Defines and registers a named UDF (NamedUserDefinedFunction0) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction10) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction11) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction12) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction13) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction14) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction15) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction16) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction17) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction18) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction19) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction1) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1 -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: Array<T> -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: BooleanArray -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: ByteArray -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: DoubleArray -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: FloatArray -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: IntArray -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: LongArray -> ... }
Defines and registers a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf.register("myUdf") { t1: ShortArray -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction20) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction21) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction22) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21, t22: T22 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction2) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction3) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction4) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction5) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction6) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction7) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction8) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8 -> ... }
Defines and registers a named UDF (NamedUserDefinedFunction9) instance based on the (lambda) function func. For example: val myUdf = udf.register("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9 -> ... }
Creates and registers a UDF (NamedUserDefinedFunction0) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction10) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction11) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction12) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction13) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction14) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction15) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction16) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction17) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction18) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction19) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction1) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a vararg UDF (NamedUserDefinedFunctionVararg) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction20) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction21) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction22) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction2) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction3) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction4) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction5) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction6) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction7) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction8) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Creates and registers a UDF (NamedUserDefinedFunction9) from a function reference. For example: val myUdf = udf.register("myFunction", ::myFunction)
Registers a UDAF for SQL based on the given arguments. Returns the UDAF as NamedUserDefinedFunction. Obtains a NamedUserDefinedFunction1 that wraps the given agg so that it may be used with Data Frames.
Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys.
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD using the existing partitioner/parallelism level.
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.
Perform a right outer join of this and other. For each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k. Uses the given Partitioner to partition the output RDD.
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
Compute the sample standard deviation of this RDD's elements (which corrects for bias in estimating the standard deviation by dividing by N-1 instead of N).
Compute the sample variance of this RDD's elements (which corrects for bias in estimating the variance by dividing by N-1 instead of N).
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for that storage system. The JobConf should set an OutputFormat and any output paths required (e.g. a table name to write to) in the same way as it would be configured for a Hadoop MapReduce job.
Output the RDD to any Hadoop-supported file system.
Output the RDD to any Hadoop-supported file system, compressing with the supplied codec.
Output the RDD to any Hadoop-supported storage system, using a Configuration object for that storage system.
Output the RDD to any Hadoop-supported file system.
Returns a new Dataset by computing the given Column expressions for each element.
Control our logLevel. This overrides any user-defined log settings.
Alias for Dataset.sort which forces the user to provide sorted columns from the source dataset.
Allows sorting a data class Dataset on one or more of the properties of the data class.
Returns a dataset sorted by the first (first) value of each Pair inside.
Returns a dataset sorted by the first (_1) value of each Arity2 inside.
Returns a dataset sorted by the first (_1) value of each Tuple2 inside.
Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling JavaRDD.collect or save on the resulting RDD will return or output an ordered list of records (in the save case, they will be written to multiple part-X files in the filesystem, in order of the keys).
Returns a dataset sorted by the second (second) value of each Pair inside.
Returns a dataset sorted by the second (_2) value of each Arity2 inside.
Returns a dataset sorted by the second (_2) value of each Tuple2 inside.
(Kotlin-specific) Maps the Dataset to only retain the "keys" or Pair.first values.
(Kotlin-specific) Maps the Dataset to only retain the "keys" or Arity2._1 values.
(Kotlin-specific) Maps the Dataset to only retain the "keys" or Tuple2._1 values.
(Kotlin-specific) Maps the Dataset to only retain the "values" or Pair.second values.
(Kotlin-specific) Maps the Dataset to only retain the "values" or Arity2._2 values.
(Kotlin-specific) Maps the Dataset to only retain the "values" or Tuple2._2 values.
Returns a new Arity10 based on this Tuple10.
Returns a new Arity11 based on this Tuple11.
Returns a new Arity12 based on this Tuple12.
Returns a new Arity13 based on this Tuple13.
Returns a new Arity14 based on this Tuple14.
Returns a new Arity15 based on this Tuple15.
Returns a new Arity16 based on this Tuple16.
Returns a new Arity17 based on this Tuple17.
Returns a new Arity18 based on this Tuple18.
Returns a new Arity19 based on this Tuple19.
Returns a new Arity1 based on this Tuple1.
Returns a new Arity20 based on this Tuple20.
Returns a new Arity21 based on this Tuple21.
Returns a new Arity22 based on this Tuple22.
Returns a new Arity2 based on this Tuple2.
Returns a new Arity3 based on this Tuple3.
Returns a new Arity4 based on this Tuple4.
Returns a new Arity5 based on this Tuple5.
Returns a new Arity6 based on this Tuple6.
Returns a new Arity7 based on this Tuple7.
Returns a new Arity8 based on this Tuple8.
Returns a new Arity9 based on this Tuple9.
Utility method to create a DataFrame from a list.
Utility method to create a Dataset.
Utility method to create a Dataset.
Utility method to convert JavaDoubleRDD to JavaRDD<Double>.
Utility method to create a Dataset from a list.
Utility method to create a Dataset from a JavaRDD.
Utility method to create a Dataset from an RDD.
Utility method to convert JavaRDD<Number> to JavaDoubleRDD.
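A sketch of the Dataset-creation utilities. The City class and the sample rows are hypothetical:

    import org.jetbrains.kotlinx.spark.api.*

    data class City(val name: String, val population: Int)

    fun main() = withSpark {
        // Typed Dataset from varargs or from a Kotlin list.
        val fromVarargs = dsOf(City("Oslo", 700_000), City("Tallinn", 450_000))
        val fromList = listOf(City("Riga", 600_000)).toDS()

        // Plain Spark conversion to an untyped DataFrame.
        val df = fromList.toDF()

        fromVarargs.show()
        df.printSchema()
    }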
Converts nullable value to Optional.
Converts Scala Option to Java Optional.
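A sketch of the nullability conversions; the names getOrNull and toOptional are assumptions based on the entries above:

    import java.util.Optional
    import org.jetbrains.kotlinx.spark.api.*

    fun main() {
        val present: Optional<String> = Optional.of("value")
        val asNullable: String? = present.getOrNull()               // "value"

        val nullable: String? = null
        val asOptional: Optional<String> = nullable.toOptional()    // Optional.empty()
    }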
Returns a new Tuple10 based on this Arity10.
Returns a new Tuple11 based on this Arity11.
Returns a new Tuple12 based on this Arity12.
Returns a new Tuple13 based on this Arity13.
Returns a new Tuple14 based on this Arity14.
Returns a new Tuple15 based on this Arity15.
Returns a new Tuple16 based on this Arity16.
Returns a new Tuple17 based on this Arity17.
Returns a new Tuple18 based on this Arity18.
Returns a new Tuple19 based on this Arity19.
Returns a new Tuple1 based on this Arity1.
Returns a new Tuple20 based on this Arity20.
Returns a new Tuple21 based on this Arity21.
Returns a new Tuple22 based on this Arity22.
Returns a new Tuple2 based on this Arity2.
Returns a new Tuple3 based on this Arity3.
Returns a new Tuple4 based on this Arity4.
Returns a new Tuple5 based on this Arity5.
Returns a new Tuple6 based on this Arity6.
Returns a new Tuple7 based on this Arity7.
Returns a new Tuple8 based on this Arity8.
Returns a new Tuple9 based on this Arity9.
Provides a type hint about the expected return value of this column. This information can be used by operations such as select on a Dataset to automatically convert the results into the correct JVM types.
Obtains a NamedUserDefinedFunction1 that wraps the given agg so that it may be used with Data Frames.
Obtains a UserDefinedFunction1 created from an Aggregator created by the given arguments so that it may be used with Data Frames.
Obtains a NamedUserDefinedFunction1 that wraps the given agg so that it may be used with Data Frames.
Obtains a UserDefinedFunction1 that wraps the given agg so that it may be used with Data Frames.
Defines a UDF (UserDefinedFunction0) instance based on the (lambda) function func. For example: val myUdf = udf { -> ... }
Defines a UDF (UserDefinedFunction10) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10 -> ... }
Defines a UDF (UserDefinedFunction11) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11 -> ... }
Defines a UDF (UserDefinedFunction12) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12 -> ... }
Defines a UDF (UserDefinedFunction13) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13 -> ... }
Defines a UDF (UserDefinedFunction14) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14 -> ... }
Defines a UDF (UserDefinedFunction15) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15 -> ... }
Defines a UDF (UserDefinedFunction16) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16 -> ... }
Defines a UDF (UserDefinedFunction17) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17 -> ... }
Defines a UDF (UserDefinedFunction18) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18 -> ... }
Defines a UDF (UserDefinedFunction19) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19 -> ... }
Defines a UDF (UserDefinedFunction1) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1 -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: Array<T> -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: BooleanArray -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: ByteArray -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: DoubleArray -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: FloatArray -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: IntArray -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: LongArray -> ... }
Defines a vararg UDF (UserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf { t1: ShortArray -> ... }
Defines a UDF (UserDefinedFunction20) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20 -> ... }
Defines a UDF (UserDefinedFunction21) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21 -> ... }
Defines a UDF (UserDefinedFunction22) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21, t22: T22 -> ... }
Defines a UDF (UserDefinedFunction2) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2 -> ... }
Defines a UDF (UserDefinedFunction3) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3 -> ... }
Defines a UDF (UserDefinedFunction4) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4 -> ... }
Defines a UDF (UserDefinedFunction5) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5 -> ... }
Defines a UDF (UserDefinedFunction6) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6 -> ... }
Defines a UDF (UserDefinedFunction7) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7 -> ... }
Defines a UDF (UserDefinedFunction8) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8 -> ... }
Defines a UDF (UserDefinedFunction9) instance based on the (lambda) function func. For example: val myUdf = udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9 -> ... }
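As a brief sketch of the lambda-based variants above (the types and the commented-out invocation are illustrative assumptions, not taken from this reference):

    // A two-argument UDF built from a lambda; the Int types are illustrative.
    val add = udf { a: Int, b: Int -> a + b }
    // A vararg UDF built from a lambda over a primitive array.
    val sumAll = udf { xs: IntArray -> xs.sum() }
    // Assumed invocation with (typed) columns in a select, as described above:
    // ds.select(add(col("a"), col("b")))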
Creates a UDF (NamedUserDefinedFunction0) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction10) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction11) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction12) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction13) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction14) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction15) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction16) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction17) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction18) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction19) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction1) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a vararg UDF (NamedUserDefinedFunctionVararg) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction20) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction21) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction22) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction2) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction3) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction4) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction5) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction6) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction7) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction8) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
Creates a UDF (NamedUserDefinedFunction9) from a function reference adapting its name by reflection. For example: val myUdf = udf(::myFunction)
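For instance, a minimal sketch of the reflective-name variant; the referenced function itself is a made-up example:

    // The resulting UDF takes the name "square" from the referenced function via reflection.
    fun square(x: Int): Int = x * x
    val squareUdf = udf(::square)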
Defines a named UDF (NamedUserDefinedFunction0) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { -> ... }
Name can also be supplied using delegate: val myUdf by udf { -> ... }
Defines a named UDF (NamedUserDefinedFunction10) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10 -> ... }
Defines a named UDF (NamedUserDefinedFunction11) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11 -> ... }
Defines a named UDF (NamedUserDefinedFunction12) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12 -> ... }
Defines a named UDF (NamedUserDefinedFunction13) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13 -> ... }
Defines a named UDF (NamedUserDefinedFunction14) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14 -> ... }
Defines a named UDF (NamedUserDefinedFunction15) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15 -> ... }
Defines a named UDF (NamedUserDefinedFunction16) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16 -> ... }
Defines a named UDF (NamedUserDefinedFunction17) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17 -> ... }
Defines a named UDF (NamedUserDefinedFunction18) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18 -> ... }
Defines a named UDF (NamedUserDefinedFunction19) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19 -> ... }
Defines a named UDF (NamedUserDefinedFunction1) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1 -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: Array<T> -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: Array<T> -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: BooleanArray -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: BooleanArray -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: ByteArray -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: ByteArray -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: DoubleArray -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: DoubleArray -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: FloatArray -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: FloatArray -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: IntArray -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: IntArray -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: LongArray -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: LongArray -> ... }
Defines a named vararg UDF (NamedUserDefinedFunctionVararg) instance based on the (lambda) function varargFunc. For example: val myUdf = udf("myUdf") { t1: ShortArray -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: ShortArray -> ... }
Defines a named UDF (NamedUserDefinedFunction20) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20 -> ... }
Defines a named UDF (NamedUserDefinedFunction21) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21 -> ... }
Defines a named UDF (NamedUserDefinedFunction22) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21, t22: T22 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, t13: T13, t14: T14, t15: T15, t16: T16, t17: T17, t18: T18, t19: T19, t20: T20, t21: T21, t22: T22 -> ... }
Defines a named UDF (NamedUserDefinedFunction2) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2 -> ... }
Defines a named UDF (NamedUserDefinedFunction3) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3 -> ... }
Defines a named UDF (NamedUserDefinedFunction4) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4 -> ... }
Defines a named UDF (NamedUserDefinedFunction5) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5 -> ... }
Defines a named UDF (NamedUserDefinedFunction6) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6 -> ... }
Defines a named UDF (NamedUserDefinedFunction7) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7 -> ... }
Defines a named UDF (NamedUserDefinedFunction8) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8 -> ... }
Defines a named UDF (NamedUserDefinedFunction9) instance based on the (lambda) function func. For example: val myUdf = udf("myUdf") { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9 -> ... }
Name can also be supplied using delegate: val myUdf by udf { t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9 -> ... }
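A short sketch contrasting the two naming styles described above; the lambda bodies are illustrative:

    // Explicit name passed to udf(...):
    val toUpper = udf("toUpper") { s: String -> s.uppercase() }
    // Name taken from the property via a delegate:
    val trimmed by udf { s: String -> s.trim() }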
Creates a UDF (NamedUserDefinedFunction0) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction10) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction11) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction12) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction13) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction14) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction15) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction16) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction17) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction18) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction19) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction1) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a vararg UDF (NamedUserDefinedFunctionVararg) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction20) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction21) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction22) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction2) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction3) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction4) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction5) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction6) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction7) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction8) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
Creates a UDF (NamedUserDefinedFunction9) from a function reference. For example: val myUdf = udf("myFunction", ::myFunction)
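For example, a sketch of the explicitly named function-reference variant; the function is illustrative:

    fun negate(x: Int): Int = -x
    // The UDF is used under the explicit name "negate" rather than a reflected one.
    val negateUdf = udf("negate", ::negate)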
Unary minus, i.e. negate the expression.
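A one-line sketch, assuming the operator applies to a column expression and using an illustrative column name:

    // Negate the "amount" column, i.e. SELECT -amount.
    val negated = -ds.col("amount")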
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. Hash partitioning is used to generate the RDDs with Spark's default number of partitions. Note: Needs checkpoint directory to be set.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. [org.apache.spark.Partitioner] is used to control the partitioning of each RDD. Note: Needs checkpoint directory to be set.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. org.apache.spark.Partitioner is used to control the partitioning of each RDD. Note: Needs checkpoint directory to be set.
Wrapper for SparkSession creation which copies its parameters from sparkConf.
Wrapper for SparkSession creation which allows setting different Spark parameters.
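A minimal sketch of both wrappers; the parameter names (sparkConf, props, appName) and the dsOf helper are assumptions based on the descriptions above:

    // Copying parameters from an existing SparkConf (assumed overload):
    // withSpark(sparkConf = SparkConf().setAppName("example").setMaster("local[*]")) { ... }

    // Setting Spark parameters directly (assumed overload):
    withSpark(props = mapOf("spark.sql.shuffle.partitions" to "4"), appName = "example") {
        dsOf(1, 2, 3).show()
    }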
Wrapper for Spark streaming creation. spark: SparkSession and ssc: JavaStreamingContext are provided, started, awaited, and stopped automatically. The use of a checkpoint directory is optional. If checkpoint data exists in the provided checkpointPath, then the StreamingContext will be recreated from the checkpoint data. If the data does not exist, then the provided factory will be used to create a JavaStreamingContext.
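A minimal sketch of the streaming wrapper; the parameter names (batchDuration, checkpointPath, appName) and the availability of ssc inside the block are assumptions based on the description above:

    import org.apache.spark.streaming.Durations

    withSparkStreaming(
        batchDuration = Durations.seconds(1),
        checkpointPath = "/tmp/checkpoint",  // optional; enables recreation from checkpoint data
        appName = "streaming-example",
    ) {
        // ssc is the provided JavaStreamingContext
        val lines = ssc.socketTextStream("localhost", 9999)
        lines.print()
    }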