Separation of crawl phase from processing phase in Storm Crawler

Asked by aeranginkaman

I am currently working on a Storm Crawler-based project. In it, we modified some of the Bolts and Spouts of the original Storm Crawler core artifact; for example, we changed parts of ParserBolt. In addition, we developed some processing steps inside the same project, so our bolts are mixed in with the original Storm Crawler code. For example, I have an image classifier that takes images from Storm Crawler and classifies them.

Now I want to separate the crawl phase from the processing phase. For the crawl phase, I want to use the latest version of Storm Crawler and save its results into a Solr collection named Docs. For the second phase (which is independent of the crawl phase), I have another Storm-based project that has no relation to Storm Crawler. The input tuples of the second topology need to be fed from the Docs collection, and I have no idea how to feed documents from the Solr collection into that topology.

Is this a good design architecture or not? If yes, what is a good way to import the data into the second topology? It should also be noted that I want to run these projects without any downtime.
This is an opinion-based question, but to answer it: you can definitely separate your pipeline into multiple topologies. It is good practice when the phases need different types of hardware, e.g. GPUs for the image processing versus cheaper instances for the crawl.
You could index your documents into SOLR, but other solutions would also work, for instance queues. What you will need in the second topology is a bespoke SOLR spout. If you want the second project to be independent from StormCrawler, you won't be able to reuse the code from our SOLR module, but you could take it as a source of inspiration.
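Such a spout can stay independent of StormCrawler by talking to Solr's HTTP API directly. Below is a minimal pure-Java sketch of the feed logic, assuming a hypothetical `processed` status flag on each document in the `Docs` collection and Solr's cursorMark deep paging; in a real spout the fetch would live in `nextTuple()`, and `ack()`/`fail()` would update the flag. All field and collection names here are illustrative, not prescribed by Storm Crawler.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SolrDocsFeed {

    /**
     * Builds the Solr /select URL for one page of not-yet-processed
     * documents. The "processed" field is an assumption; use whatever
     * status marker your schema provides.
     */
    static String selectUrl(String solrBase, String collection,
                            String cursorMark, int rows) {
        return solrBase + "/" + collection + "/select"
                + "?q=*:*"
                + "&fq=" + URLEncoder.encode("processed:false", StandardCharsets.UTF_8)
                + "&sort=" + URLEncoder.encode("id asc", StandardCharsets.UTF_8)
                + "&rows=" + rows
                + "&cursorMark=" + URLEncoder.encode(cursorMark, StandardCharsets.UTF_8);
    }

    // Sketch of how a Storm spout would use this (in comments, since the
    // Storm classes are outside this standalone example):
    //
    //   nextTuple():
    //     if the local buffer is empty, GET
    //     selectUrl(base, "Docs", cursorMark, 50), parse the JSON
    //     response, refill the buffer, and remember
    //     response.nextCursorMark for the next page;
    //     then emit one document per call, using its Solr id as msgId.
    //
    //   ack(msgId):  atomically set processed:true on that document.
    //   fail(msgId): leave the flag untouched so the doc is re-fetched.
}
```

Because acked documents are flagged in Solr rather than tracked in spout memory, the spout can be restarted or redeployed without losing its position, which helps with the no-downtime requirement.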
There might be better approaches depending on your architecture in general and on whether the second topology needs to ingest the content of the images. That is beyond the scope of technical questions on Stack Overflow, though.