Pimcore is an open-source software platform used by more than 110 000 companies worldwide. It offers state-of-the-art digital asset management (DAM), product information management (PIM), master data management (MDM), digital experience management (DXP/CMS), multi-channel publishing, and digital commerce. Acclaimed by analysts at Gartner and Forrester, its customers include Fortune 100 companies such as Pepsi, Sony, and Audi. The company's headquarters are in Salzburg, Austria.
Pimcore’s unified platform enables organizations to manage product information, digital assets, digital commerce, and web content in a single consolidated setup, giving them one ‘trusted view’ of their information and helping them deliver a superior product experience to customers across multiple touchpoints.
What makes Pimcore a great open-source alternative to proprietary, closed-source software is its incredible flexibility, 100% API-driven architecture, composable technology, speed-to-market, plus the momentum of 150+ global solution partners. The Pimcore Platform™ is loved by developers, agencies, and enterprises. For more information, please visit pimcore.com.
Based on our record, Apache Spark seems to be a lot more popular than Pimcore: we've tracked 56 links to Apache Spark but only 2 mentions of Pimcore. We track product recommendations and mentions on various public social media platforms and blogs; these can help you identify which product is more popular and what people think of it.
Also, I have Pimcore up and running on AWS as well: https://pimcore.com/en. Source: over 2 years ago
Pimcore, by definition, is an open-source digital experience platform with superb data management capabilities. Source: over 2 years ago
Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that uses the JVM. I previously used Scala, which is a JVM language, to use Apache Spark for Data Engineering workloads, but this is for another post 😉. - Source: dev.to / 3 months ago
Consume data into third-party software (then let OpenSearch, Apache Spark, or Apache Pinot handle it) for analysis/data science, GIS systems (so you can put reports on a map), or any ticket management system. - Source: dev.to / 4 months ago
Also, this knowledge applies to learning more about data engineering, as this field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc. - Source: dev.to / 6 months ago
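The event-driven approach mentioned here can be sketched in plain Python, with a stdlib queue standing in for a broker such as Kafka (the event shape, topic, and handler below are purely illustrative, not any real Kafka API):

```python
import queue
import threading

def producer(events: "queue.Queue") -> None:
    """Emit a few illustrative events, then a sentinel to stop the consumer."""
    for i in range(3):
        events.put({"type": "page_view", "user": f"u{i}"})
    events.put(None)  # sentinel: no more events

def consumer(events: "queue.Queue", seen: list) -> None:
    """React to each event as it arrives, instead of polling for new data."""
    while (event := events.get()) is not None:
        seen.append(event["user"])

events: "queue.Queue" = queue.Queue()
seen: list = []
t = threading.Thread(target=consumer, args=(events, seen))
t.start()
producer(events)
t.join()
print(seen)  # -> ['u0', 'u1', 'u2']
```

Tools like Kafka, Spark, and Flink apply the same producer/consumer idea across machines, adding durability, partitioning, and fault tolerance on top.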
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features. - Source: dev.to / 6 months ago
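The sources → transforms → sinks model described above can be illustrated with a tiny pipeline in Python. This is a generic sketch of the pattern, not SeaTunnel's actual API (SeaTunnel pipelines are normally declared in configuration files and executed by one of its engines):

```python
from typing import Iterable, Iterator

Row = dict

def source() -> Iterator[Row]:
    """Source: yield rows, e.g. read from a file, database, or stream."""
    yield {"name": "alice", "age": 31}
    yield {"name": "bob", "age": 17}

def adults_only(rows: Iterable[Row]) -> Iterator[Row]:
    """Transform: filter and reshape rows on their way through."""
    for row in rows:
        if row["age"] >= 18:
            yield {"name": row["name"].title(), "age": row["age"]}

def sink(rows: Iterable[Row]) -> list:
    """Sink: write rows out; here we simply collect them in memory."""
    return list(rows)

result = sink(adults_only(source()))
print(result)  # -> [{'name': 'Alice', 'age': 31}]
```

Because each stage only consumes an iterable and yields rows, stages can be swapped independently, which is the same decoupling that lets SeaTunnel run one pipeline definition on different engines.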
A JVM based framework named "Spark", when https://spark.apache.org exists? - Source: Hacker News / 12 months ago
Syndigo Content Experience Hub - Your end-to-end solution to collect, create, enrich, manage, syndicate, and analyze all your digital assets, core marketing, and enhanced product content. Request [...]
Apache Flink - Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations.
Boomi - The #1 Integration Cloud - Build Integrations anytime, anywhere with no coding required using Dell Boomi's industry leading iPaaS platform.
Apache Airflow - Airflow is a platform to programmatically author, schedule, and monitor data pipelines.
Akeneo - Akeneo is an open-source Product Information Management solution.
Hadoop - Open-source software for reliable, scalable, distributed computing