Welcome to our DPL (Data Processing Language) platform, a powerful tool for processing and visualizing data using Python, Scala, or Java. The platform makes it easy to configure, run, and share data blocks and flows through an intuitive web or CLI interface.
Key Features
Intuitive Web/CLI Interface
The web and CLI interfaces let you configure, run, and share data blocks and flows with minimal effort. Through the DPL configuration, you create data processing jobs in Python (Pandas/PySpark), Scala, or Java. The interface is user-friendly, and job definitions are expressed in detailed JSON.
Common Framework for Data Processing Jobs
The platform provides a common framework for data processing jobs written in Python (Pandas/PySpark), Scala, or Java. The framework is based on a detailed JSON job format, which makes jobs easy to create and manage and ensures they execute consistently and reliably regardless of the programming language you choose.
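As an illustration, here is a minimal sketch of what a JSON job definition might look like, loaded with Python's standard json module. The field names and values are assumptions invented for this example, not the platform's actual schema.

```python
import json

# Hypothetical DPL job definition -- field names are illustrative
# assumptions, not the platform's real schema.
job_definition = """
{
  "name": "daily_sales_rollup",
  "language": "python",
  "engine": "pypandas",
  "source": {"connector": "jdbc", "table": "sales"},
  "transform": {"group_by": ["region"], "aggregate": {"amount": "sum"}},
  "target": {"connector": "csv", "path": "/data/out/sales_rollup.csv"}
}
"""

job = json.loads(job_definition)          # parse the job definition
print(job["name"], "runs on", job["engine"])
```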
Data Visualization
The platform visualizes data in web containers such as grids and tables, with pivoting and export options, making it easy to analyze and present data clearly and concisely. You can visualize data in real time and share the results with your team members.
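To make the pivoting feature concrete, the Pandas sketch below (using invented sample data) builds the same kind of pivoted summary a DPL grid would display in the web UI.

```python
import pandas as pd

# Invented sample data standing in for a data block's output.
df = pd.DataFrame({
    "region": ["EMEA", "EMEA", "APAC", "APAC"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [120, 135, 90, 110],
})

# Pivot revenue by region and quarter -- the shape of summary a
# pivoting grid presents.
pivot = df.pivot_table(index="region", columns="quarter",
                       values="revenue", aggfunc="sum")
print(pivot)
```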
Role-based Access Controls
Role-based access controls ensure that users have access only to the data they need, which helps protect sensitive data and prevent unauthorized access. You can configure permissions per user role and restrict access to specific data blocks and flows.
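As a minimal sketch of how such checks typically work, the example below assumes a simple role-to-data-block mapping; the platform's actual ACL model may differ.

```python
# Hypothetical role -> permitted data blocks mapping; the names and
# structure are assumptions made for this sketch.
ROLE_PERMISSIONS = {
    "analyst": {"sales_block", "marketing_block"},
    "admin":   {"sales_block", "marketing_block", "finance_block"},
}

def can_access(role: str, data_block: str) -> bool:
    """Return True if the given role may access the given data block."""
    return data_block in ROLE_PERMISSIONS.get(role, set())

assert can_access("admin", "finance_block")
assert not can_access("analyst", "finance_block")
```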
PYRO Server
The platform ships with a PYRO (Python Remote Objects) server for OS-level access on DPL hosts. It provides a secure, reliable way to reach data on DPL hosts and lets you manage data processing jobs from a central location.
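For readers unfamiliar with Pyro, the sketch below shows a minimal Pyro4 remote object of the kind such a server could expose; the HostAgent class and its method are illustrative assumptions, not DPL's actual server API.

```python
import shutil
import Pyro4  # Python Remote Objects (pip install Pyro4)

# Illustrative OS-level agent; the class and method are assumptions
# for this sketch, not DPL's real interface.
@Pyro4.expose
class HostAgent:
    def disk_usage(self, path="/"):
        total, used, free = shutil.disk_usage(path)
        return {"total": total, "used": used, "free": free}

if __name__ == "__main__":
    daemon = Pyro4.Daemon(host="0.0.0.0")     # listen on the DPL host
    uri = daemon.register(HostAgent())        # publish the agent
    print("Agent available at", uri)
    daemon.requestLoop()                      # serve remote calls
```

A client would then call Pyro4.Proxy(uri).disk_usage("/") to query the host remotely.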
DPL Engines
The platform offers two engines. The PyPandas engine is based on Pandas and provides a powerful, flexible way to process data in memory on a single node. The DPL engine is based on PySpark and provides a distributed, scalable way to process large datasets.
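The difference in processing model is easiest to see by expressing the same aggregation both ways; this is a generic Pandas/PySpark sketch, not the engines' internal code.

```python
import pandas as pd

# PyPandas-style: single-node, in-memory processing with Pandas.
pdf = pd.DataFrame({"region": ["EMEA", "APAC", "EMEA"],
                    "amount": [10, 20, 30]})
print(pdf.groupby("region")["amount"].sum())

# DPL-engine-style: the same aggregation in PySpark, which can
# distribute the work across a cluster for large datasets.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dpl-demo").getOrCreate()
sdf = spark.createDataFrame(pdf)
sdf.groupBy("region").agg(F.sum("amount").alias("amount")).show()
```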
DPL Jobs
DPL jobs can also be written in Scala, allowing you to build complex data processing jobs that handle large volumes of data.
Central Object Library
A central object library contains connectors, data sources, data targets, data blocks, data visualizations, and data flows. It makes it easy to combine data from multiple sources and process it into a denormalized view via the DPL UI.
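As a sketch of what building a denormalized view means in practice, the Pandas example below (with invented sources) joins two datasets the way a DPL flow would combine objects from the library.

```python
import pandas as pd

# Invented stand-ins for two sources wired up via library connectors.
orders = pd.DataFrame({"order_id": [1, 2],
                       "customer_id": [10, 11],
                       "amount": [250, 90]})
customers = pd.DataFrame({"customer_id": [10, 11],
                          "name": ["Acme", "Globex"]})

# Join them into a single denormalized view.
denormalized = orders.merge(customers, on="customer_id", how="left")
print(denormalized)
```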
Integration to Standard Reporting Interfaces
Finally, the platform integrates with standard reporting interfaces, making it easy to generate reports and share them with your team members. You can export data to various formats, including PDF, CSV, and Excel.
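For example, a Pandas result set (the platform's Python jobs use Pandas/PySpark) can be exported to CSV or Excel in one call each; to_excel requires the openpyxl package, and PDF output would typically go through a separate reporting tool, since Pandas has no built-in PDF writer.

```python
import pandas as pd

report = pd.DataFrame({"metric": ["revenue", "orders"],
                       "value": [1250, 42]})

report.to_csv("report.csv", index=False)     # CSV export
report.to_excel("report.xlsx", index=False)  # Excel export (needs openpyxl)
```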