Private objects and traits in spark and ml

657 views · Asked by Chris

Spark recently fleshed out its ML Pipeline API, so I have been looking into writing my own transformers. However, some useful utilities are private to spark or ml. Take, for example, the Identifiable trait and its companion object, which are private[spark]. I would very much like to use the randomUID method and am curious why it is not exposed.

1 Answer
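For context, this is roughly what I'd like to be able to write. A minimal sketch (MyTransformer is just a placeholder name); on my version it fails to compile because Identifiable is package-private:

```scala
import org.apache.spark.ml.util.Identifiable

class MyTransformer {
  // Fails from user code while Identifiable is private[spark]:
  // "object Identifiable in package util cannot be accessed"
  val uid: String = Identifiable.randomUID("myTransformer")
}
```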
The short version of the answer is that Spark aims for API stability, so anything the developers think they may still want to change is marked as private. Part of this is a byproduct of the PR merge process: you have to be very explicit to introduce a new public API, so it's often easier to just make private versions of the things you need. I realize this can be a bit frustrating; if there is a specific part of Spark that you think should be added to the public API, you can try filing a JIRA.
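In the meantime, one workaround is to generate an equivalent UID yourself, since randomUID is just a prefix plus a random suffix. Below is a minimal sketch against the Spark 1.4-era API, where Transformer exposes an abstract uid; the class name and prefix are made up:

```scala
import java.util.UUID

import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.StructType

// Custom transformer that sidesteps the private Identifiable.randomUID
// by generating an equivalent UID itself.
class MyTransformer(override val uid: String) extends Transformer {

  // Mimic randomUID: a short prefix plus 12 random hex characters.
  def this() = this("myTransformer_" + UUID.randomUUID().toString.takeRight(12))

  override def transform(dataset: DataFrame): DataFrame =
    dataset // identity for this sketch; real logic goes here

  override def transformSchema(schema: StructType): StructType = schema

  override def copy(extra: ParamMap): MyTransformer = defaultCopy(extra)
}
```

Alternatively, because private[spark] means visible anywhere under org.apache.spark, declaring your class inside the org.apache.spark.ml package grants access to those members, but that trick can break across versions and isn't a supported extension point.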