Performance Test Engineer at a tech vendor with 10,001+ employees
Real User
Top 20
Mar 27, 2026
My main use case for MarkLogic involves running queries to verify batch jobs. I run batch jobs and then check whether they completed correctly by querying the data through the MarkLogic query portal. I find it very handy: every time I run a job, I go to the different databases and run queries to confirm it ran fine. It is enjoyable working with MarkLogic. A recent task where MarkLogic was especially helpful involved checking the counts of various batch jobs, such as DES and PDM jobs. We always check the record count and then compare it against other tools to see whether the numbers match; it is a reconciliation across multiple tools. If, for example, pgAdmin was not working with PostgreSQL but MarkLogic was working fine, we could use MarkLogic's results to fix the issue in the tools that were not working.
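The count reconciliation the reviewer describes can be sketched in plain Python. This is an illustration only, not MarkLogic code; the job names and counts are invented.

```python
# Hypothetical sketch of count reconciliation across tools: compare record
# counts reported by two sources and flag any jobs whose counts disagree.

def compare_counts(marklogic_counts, other_counts):
    """Return job names whose counts disagree between the two sources."""
    mismatches = {}
    for job, ml_count in marklogic_counts.items():
        other = other_counts.get(job)
        if other != ml_count:
            mismatches[job] = (ml_count, other)
    return mismatches

# Invented example data: counts as reported by MarkLogic vs. PostgreSQL.
marklogic = {"DES": 120, "PDM": 45, "nightly-load": 300}
postgres = {"DES": 120, "PDM": 44, "nightly-load": 300}

print(compare_counts(marklogic, postgres))  # {'PDM': (45, 44)}
```

A mismatch like the one above is the signal to investigate the tool whose count disagrees, as described in the review.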
Senior Data Engineer at an insurance company with 10,001+ employees
Real User
Top 10
Mar 27, 2026
MarkLogic's primary use case in my experience is handling semi-structured and hierarchical data that does not fit neatly into traditional relational tables. In my current project, I work with NoSQL data, that is, semi-structured and hierarchical data. For example, in one of my insurance projects at ValueMomentum, I worked on an initiative where I had policy and claims data in XML format coming from different legacy systems. I used MarkLogic to ingest and normalize the data and integrate it with our Hive data warehouse for reporting and analytics. A specific example: I created a 360-degree view of customer policies and claims using MarkLogic, which allowed me to merge XML claim documents with relational customer information, run queries across different nodes, and feed clean, aggregated data into the ETL pipeline for downstream analytics. This saved significant time compared to flattening everything manually into SQL tables. Working with XML and integrating it with my ETL processes is quite interesting, as MarkLogic makes it far easier to handle hierarchical data than attempting to force it all into relational tables. The built-in support for XML and JSON, combined with the indexing and search capabilities, allows me to query deeply nested structures without writing extensive custom parsing code. The challenging part is mostly integration with our existing ETL pipelines: ensuring that transformed data flows correctly into Hive and matches the relational schemas. Sometimes I have to be careful with data types and schema evolution, because MarkLogic is schema-flexible but the downstream systems are strict. Overall, though, it speeds up handling complex semi-structured data and reduces manual transformation work significantly. I would say it is easier than many other XML-handling approaches I have tried.
I would add that I really appreciate how MarkLogic handles hierarchical relationships naturally, especially in insurance data, where policies contain multiple coverages, each with its own claims and documents. This nesting makes the insurance domain really cumbersome, and MarkLogic allows me to query across these nested structures directly without having to flatten everything up front. Its search and indexing features make it easier to identify anomalies or missing information in the semi-structured data before it reaches the ETL pipelines, saving considerable time during validation and reconciliation. Overall, I find it a very practical tool for bridging legacy data formats with modern data warehouses.
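The XML-to-relational normalization described above can be sketched with Python's standard library. The policy structure and field names here are hypothetical, not the reviewer's actual schema; the point is only to show nested policy/coverage/claim XML flattened into row-shaped tuples for a warehouse.

```python
# Minimal sketch: flatten nested insurance XML (policy -> coverages -> claims)
# into flat rows suitable for loading into a relational table.
# The XML layout below is invented for illustration.
import xml.etree.ElementTree as ET

POLICY_XML = """
<policy number="P-1001">
  <customer>Jane Doe</customer>
  <coverage type="auto">
    <claim id="C-1" amount="500"/>
    <claim id="C-2" amount="1200"/>
  </coverage>
  <coverage type="home">
    <claim id="C-3" amount="900"/>
  </coverage>
</policy>
"""

def flatten_claims(xml_text):
    """Yield one flat row per claim: (policy, customer, coverage, claim_id, amount)."""
    root = ET.fromstring(xml_text)
    policy = root.get("number")
    customer = root.findtext("customer")
    rows = []
    for cov in root.findall("coverage"):
        for claim in cov.findall("claim"):
            rows.append((policy, customer, cov.get("type"),
                         claim.get("id"), float(claim.get("amount"))))
    return rows

for row in flatten_claims(POLICY_XML):
    print(row)  # e.g. ('P-1001', 'Jane Doe', 'auto', 'C-1', 500.0)
```

In a document database, this flattening happens only at the boundary to the strict downstream systems; inside the store, the nested structure is queried directly.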
I use MarkLogic for performing many operations such as handling semi-structured data and building search-driven use cases. For example, I examined how it can handle and store XML documents and leverage its powerful indexing and search capabilities for fast querying. It is particularly useful in scenarios where you need flexible schemas along with advanced search capabilities including filtering, full-text search, and aggregations. A task where MarkLogic was central to my work was a travel search and filtering system, similar to a hotel listing use case. In this project, the data was semi-structured, with elements such as hotel details, amenities, and pricing that vary significantly across different entities. I used MarkLogic to store this data as JSON documents. What made MarkLogic central was its built-in indexing and search capabilities. I configured indexing on fields such as location, price range, and amenities and then used its query capabilities to perform fast filtering and full-text search. Instead of relying on a separate search system, MarkLogic handled both data storage and search efficiently, which simplified my architecture and improved query performance.
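The index-then-filter pattern from the hotel example above can be shown in a toy form. This is plain Python, not the MarkLogic API; the documents and field names are invented. It illustrates the core idea: build an index from field values to document ids, then answer filter queries by intersecting index entries instead of scanning every document.

```python
# Toy illustration of field indexing over JSON-like documents.
# Documents and fields are hypothetical.
hotels = [
    {"id": 1, "city": "Lisbon", "price": 90, "amenities": ["wifi", "pool"]},
    {"id": 2, "city": "Lisbon", "price": 150, "amenities": ["wifi", "gym"]},
    {"id": 3, "city": "Porto", "price": 80, "amenities": ["wifi"]},
]

def build_index(docs, field):
    """Map each value of `field` to the set of document ids holding it."""
    index = {}
    for doc in docs:
        values = doc[field] if isinstance(doc[field], list) else [doc[field]]
        for value in values:
            index.setdefault(value, set()).add(doc["id"])
    return index

city_idx = build_index(hotels, "city")
amenity_idx = build_index(hotels, "amenities")

# Hotels in Lisbon that have a pool: intersect the two index entries.
matches = city_idx["Lisbon"] & amenity_idx["pool"]
print(sorted(matches))  # [1]
```

A document database maintains such indexes automatically as documents are loaded, which is what makes storing and searching in one system practical.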
My main use case for MarkLogic is for a specific client where I handle XML data in the backend. I use XQuery, and we utilize MarkLogic mostly for querying the data in the backend. In one of my use cases, we stored customer policy records as XML documents in MarkLogic. Each document contained details such as policy number, customer name, and other metadata. A common requirement was that users often did not remember the full policy number, so they searched using partial input, such as part of the policy number or their name. I implemented XQuery-based search logic along with proper indexing, so MarkLogic could efficiently return the matching documents even with incomplete input. This significantly improved search performance and user experience. Apart from the search cases, one key advantage we leveraged was MarkLogic's schema flexibility: since our data was stored in XML, we could easily accommodate changes in the structure without major migration. I also worked on optimizing query performance by configuring indexes properly, which reduced query response time significantly. Additionally, we used MarkLogic as a central data store, integrated with the backend service through APIs, ensuring fast and reliable data access. We also ensured that the queries were written efficiently and aligned with the index configurations to avoid full document scans, which is critical for performance in MarkLogic.
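The partial-input search logic described above can be sketched in plain Python (the reviewer's actual implementation is XQuery with MarkLogic indexes; this is only an illustration of the matching behavior, with invented records).

```python
# Sketch of partial-input search: match policy documents on a fragment of the
# policy number or the customer name, case-insensitively. Records are invented.
policies = [
    {"policy": "POL-2024-0815", "name": "Anil Kumar"},
    {"policy": "POL-2023-7741", "name": "Maria Gomez"},
    {"policy": "POL-2024-0099", "name": "Anita Rao"},
]

def search(fragment, docs):
    """Return documents whose policy number or name contains `fragment`."""
    needle = fragment.lower()
    return [d for d in docs
            if needle in d["policy"].lower() or needle in d["name"].lower()]

print([d["policy"] for d in search("0815", policies)])  # ['POL-2024-0815']
print([d["policy"] for d in search("ani", policies)])   # matches Anil and Anita
```

In the real system, an index over the searched fields answers these lookups without a full document scan, which is the performance point the review makes.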
Non IT Recruiter at a computer software company with 11-50 employees
Real User
Top 10
Mar 24, 2026
My main use case for MarkLogic is storing data. A quick specific example: we have a data warehouse, and we use MarkLogic as a NoSQL database to store, manage, and search complex heterogeneous data.
Full Stack Developer at an educational organization with 1,001-5,000 employees
Real User
Top 10
Jan 6, 2026
My main use case for MarkLogic is data development. We work with banking data, which we convert to PDF or use in the applications we run. A quick, specific example of how I use MarkLogic with my banking data involves creating PDFs from XML files and developing subscription-style applications; from those applications, we sent scheduled PDFs and scheduled emails generated using MarkLogic. In addition to my main use case, I also perform development and some MarkLogic administration work.
Software Engineer at Thomson Reuters International Pvt. Ltd.
Real User
Feb 24, 2023
We are using the solution to ingest source data into a database and convert everything into a standard format. We also run checks over the content to ensure everything is processing properly. It's great for ingesting data and standardizing everything.