If so, then https://solr.apache.org/ can be a solution, though there's a bit of setup involved. Oh yeah, you get to write your own "search interface" too, which would end up calling Solr's API to find stuff. - Source: Reddit / 3 months ago
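A minimal sketch of what such a "search interface" could look like, assuming a local Solr instance on the default port, a hypothetical core named `my_core`, and Python's `requests` library:

```python
import requests

def search(user_query, rows=10):
    # The "search interface" is just a thin wrapper around Solr's /select handler.
    # Host, port, and core name here are assumptions for illustration.
    resp = requests.get(
        "http://localhost:8983/solr/my_core/select",
        params={"q": user_query, "rows": rows, "wt": "json"},
    )
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

print(search("client_name:acme"))
```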
Developers will use their SQL database when searching for specific things like client names, product names, or addresses. Now when you want to level up from there and search all tables, you're better off using a separate server with a dedicated program like https://solr.apache.org/. - Source: Reddit / 8 months ago
We’re using a self-managed OpenSearch node here, but you can use Lucene, Solr, Elasticsearch, or Atlas Search. - Source: Reddit / 8 months ago
Finally, if you really want a full-fledged solution, the most popular are good old Sphinx http://sphinxsearch.com or Apache Solr https://solr.apache.org, but as these take a bit of time to set up, I'd make sure that's truly needed. Chances are, unless your collection is the size of the Library of Congress, you are probably fine with ripgrep on your own indexes as text, some R package, or SQLite's FTS5 extension... - Source: Reddit / 10 months ago
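To show how little setup the lightweight route needs, here is a small sketch of SQLite's FTS5 extension used from Python (assuming an SQLite build that includes FTS5, which most modern builds do):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# FTS5 virtual table: SQLite maintains the full-text index for us.
con.execute("CREATE VIRTUAL TABLE notes USING fts5(title, body)")
con.executemany(
    "INSERT INTO notes (title, body) VALUES (?, ?)",
    [
        ("meeting", "discussed the solr migration plan"),
        ("todo", "benchmark ripgrep against the sqlite index"),
    ],
)

# MATCH runs a full-text query against the index.
for (title,) in con.execute("SELECT title FROM notes WHERE notes MATCH 'solr'"):
    print(title)
```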
As a fellow developer, I would have to recommend all of the above. However, in terms of priorities, I'd suggest using a standard tag system first and then branching into text search, which is a bit more complicated. If you've never made a search function before, or simply never made one for a large-scale media app, I'd recommend checking out Solr. - Source: Reddit / about 1 year ago
Fast searching usually requires indexing the files in question. There are a number of text-file indexing solutions, many of which use Xapian, Sphinx, or Lucene/Solr under the hood. Based on conditions (watching files/directories, cron jobs, new-mail triggers, etc.), they'll add/remove files to the index, and you can then use a corresponding command to compose queries across that data. If it's indexed, it... - Source: Reddit / about 1 year ago
I use (and do consulting for) Apache Solr (https://solr.apache.org/) for large-scale search (into the billions of records). The latest project is https://www.domaincodex.com, with 320 million records of domain intelligence data; feel free to message me. Good luck! - Source: Reddit / about 1 year ago
Assuming that you have access to some anime data, or are willing to collect it, here is how you can get started: most CS programs around the world offer a course that teaches students exactly what you just described. That course is usually known as Information Retrieval. Just google this term, and you will find a lot of information about the problem you are facing. If you don't... - Source: Reddit / over 1 year ago
ListItem(name='Apache Solr', website='https://solr.apache.org/', category='Full-Text Search', short_description='Solr is an open-source enterprise-search platform, written in Java. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document handling.'), - Source: Reddit / over 1 year ago
The pre-production and production environments were composed of front-end and search (Solr) servers, also operated internally. This was another source of regular maintenance costs. - Source: dev.to / over 1 year ago
There’s Apache Solr, which runs on the same backend (Lucene) as Elasticsearch. I’ve heard it’s easier to manage than ES while still being similar in capabilities. - Source: Reddit / over 1 year ago
The underlying technology of all search engines is similar from a data structure perspective: an inverted index, but the datastores are very different. Industry-standard solutions for search such as Elasticsearch and Apache Solr are built for horizontal scaling. - Source: dev.to / over 1 year ago
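As an illustration of that shared data structure (a toy sketch, not any particular engine's implementation), an inverted index in Python could look like this:

```python
from collections import defaultdict

# Toy corpus: document id -> text.
docs = {
    1: "solr is built on lucene",
    2: "elasticsearch is also built on lucene",
    3: "postgres ships its own full text search",
}

# Inverted index: term -> set of document ids containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    # An AND query is the intersection of the posting lists, one per term.
    postings = [index[term] for term in query.split()]
    return set.intersection(*postings) if postings else set()

print(search("built on lucene"))  # -> {1, 2}
```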
PostgreSQL has a lot of advanced extensions and indexes to speed this up. But one should keep in mind that these have their own limitations in terms of speed and might not work well with languages other than English or with multi-byte encodings such as UTF-8. However, these fill the critical gap where the functionalities mentioned are necessary but we cannot go for a full-fledged search solution such as Solr/Elasticsearch. - Source: dev.to / over 1 year ago
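For context, PostgreSQL's built-in full-text search looks roughly like the following. This is a sketch only: it assumes a reachable local database, the psycopg2 driver, and a hypothetical articles(title, body) table; the 'english' text-search configuration is exactly the language limitation mentioned above.

```python
import psycopg2  # assumes psycopg2 is installed and PostgreSQL is running locally

conn = psycopg2.connect("dbname=app")  # hypothetical connection string
cur = conn.cursor()

# to_tsvector/to_tsquery are tied to a text-search configuration ('english' here),
# which is where the language limitations come from.
cur.execute("""
    SELECT title
    FROM articles
    WHERE to_tsvector('english', body) @@ to_tsquery('english', 'search & engine')
""")
print(cur.fetchall())

# A GIN index on the same expression keeps these queries fast as the table grows:
# CREATE INDEX articles_body_fts ON articles USING gin (to_tsvector('english', body));
```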
The bigger problem is the search capability. If you had a typical dynamic website with a database and some backend (be it PHP or any other language), you might get away with implementing the search capability through various SQL queries. However, that often doesn't scale well and gets complex. Personally, I would recommend just going straight to something like Apache Solr. - Source: Reddit / over 1 year ago
Apache Solr is the popular, blazing-fast, open-source enterprise search platform built on Apache Lucene. Solr is a standalone search server with a REST-like API. You can put documents in it (called "indexing") via JSON, XML, CSV, or binary over HTTP. You query it via HTTP GET and receive JSON, XML, CSV, or binary results. - Source: dev.to / over 1 year ago
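As a rough illustration of that indexing and querying flow, assuming a local Solr instance on the default port, a hypothetical core named `my_core`, and Python's `requests` library:

```python
import requests

SOLR = "http://localhost:8983/solr/my_core"  # hypothetical core name

# Indexing: POST documents as JSON to the update handler.
docs = [
    {"id": "1", "title": "Apache Solr in practice"},
    {"id": "2", "title": "Full-text search with Lucene"},
]
requests.post(f"{SOLR}/update?commit=true", json=docs).raise_for_status()

# Querying: HTTP GET against /select, JSON results back.
resp = requests.get(f"{SOLR}/select", params={"q": "title:lucene", "wt": "json"})
for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc["title"])
```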