Specific components of the master data management system are designed to address the needs of a master data management solution. These components are Sun Master Index, Sun Data Integrator, Sun Data Mashup Engine, and the data quality tools. Together they provide data cleansing, profiling, standardization, matching, and stewardship for the master data management system.
Sun Master Index
First and foremost, Sun Master Index provides a flexible framework for designing and creating custom single-view applications, or master indexes. At the center of the master data management solution is a master index that contains the most current and accurate data about each object in the business domain. Sun Master Index provides a wizard that guides you through all the phases of creating a master index application. Using the wizard, you can define a custom master index with a data structure, processing logic, and matching logic tailored to the type of data you are working with. Sun Master Index also provides a graphical editor so you can customize the business logic, including matching, standardization, queries, and so on.
Sun Master Index addresses the problems of dispersed data and poor data quality by identifying common records, using data cleansing and matching technology to automatically build a cross-index of the many different local identifiers an entity may have. Because master index operations can be exposed as services, applications can then use the information stored by the master index to gain a complete view of an entity. Sun Master Index also provides the ability to monitor and maintain reference data through a customizable web-based user interface called the Master Index Data Manager.
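As a rough illustration of the cross-index idea (the class and method names below are hypothetical, not the Sun Master Index API), a minimal sketch might map each system's local identifier to a single enterprise-wide identifier (EUID):

```python
# Illustrative sketch only: a cross-index that maps each source system's
# local identifier to one enterprise-wide identifier (EUID), so that all
# records for an entity can be found from any of its local identifiers.

class CrossIndex:
    def __init__(self):
        self._next_euid = 1
        self._local_to_euid = {}   # (system, local_id) -> EUID
        self._euid_to_locals = {}  # EUID -> set of (system, local_id)

    def link(self, system, local_id, euid=None):
        """Register a local identifier; mint a new EUID if none is given."""
        if euid is None:
            euid = f"EUID-{self._next_euid:06d}"
            self._next_euid += 1
        self._local_to_euid[(system, local_id)] = euid
        self._euid_to_locals.setdefault(euid, set()).add((system, local_id))
        return euid

    def euid_for(self, system, local_id):
        return self._local_to_euid.get((system, local_id))

    def single_view(self, euid):
        """All local identifiers known for one entity."""
        return sorted(self._euid_to_locals.get(euid, set()))
```

In this toy model, once matching has decided that a registration record and a billing record describe the same entity, linking both to the same EUID is what lets an application retrieve the complete view from either identifier.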
Sun Data Integrator
Sun Data Integrator is an extract, transform, and load (ETL) tool designed for high-performance processing of bulk data between files and databases. It handles high-volume data transfer and transformation between many different data sources, both relational and non-relational. Sun Data Integrator is specifically designed to process very large data sets, which makes it well suited for loading data from systems throughout an organization into master index databases.
Sun Data Integrator provides a wizard that guides you through the phases of creating basic and advanced ETL mappings and collaborations. It also provides options to generate a staging database and a bulk loader for the legacy data that will be loaded into a master index database; these options are based on the object structure defined for the master index. The ETL Collaboration Editor lets you customize the required mappings and transformations quickly and easily, and supports a comprehensive set of data operators.
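The extract-transform-load flow described above can be sketched in a few lines of Python. The source column names, the target fields, and the transformations are all invented for illustration, and a plain list stands in for the staging database:

```python
# Minimal ETL sketch (illustrative only): extract rows from a legacy
# source, apply a field mapping with simple transformations, and load
# the results into a staging list standing in for the staging database.

def extract(rows):
    """Extract step: here the 'source' is just an iterable of dicts."""
    yield from rows

def transform(row, mapping):
    """Apply a target-field -> transformation mapping to one source row."""
    return {target: fn(row) for target, fn in mapping.items()}

def load(staging, row):
    """Load step: append the transformed row to the staging area."""
    staging.append(row)

# Hypothetical mapping from legacy column names to master-index fields.
MAPPING = {
    "FirstName": lambda r: r["fname"].strip().title(),
    "LastName":  lambda r: r["lname"].strip().title(),
    "DOB":       lambda r: r["birth"].replace("/", "-"),
}

def run_etl(source_rows):
    staging = []
    for row in extract(source_rows):
        load(staging, transform(row, MAPPING))
    return staging
```

A real ETL collaboration would stream from files or JDBC sources and write to database tables, but the mapping-of-transformations shape is the same idea the wizard and the ETL Collaboration Editor let you define graphically.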
Sun Data Integrator also works within the master data management system to reduce the time required to match and load large volumes of data into the master index database.
Sun Data Mashup Engine
The Sun Data Mashup Engine aggregates information in real time from a wide variety of sources into a single view. It can aggregate data from delimited flat files, fixed-width files, relational databases, and so on. The Sun Data Mashup Engine extracts and transforms the data in order to aggregate it into a report that can act as a virtual database or a web service. Data Mashup can work within the Master Data Management Suite to expose certain suite data sources as services.
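A minimal sketch of the aggregation idea, assuming a pipe-delimited flat file and an in-memory table standing in for a relational source (all data and field names invented):

```python
# Illustrative mashup sketch: combine fields for the same key from a
# delimited flat file and a relational-style table into one virtual record.

import csv
import io

# A pipe-delimited flat file (held in a string for the example).
FLAT_FILE = "id|name\n101|Alice Smith\n102|Bob Jones\n"

# A dict standing in for a relational table keyed by the same id.
DB_TABLE = {101: {"balance": "250.00"}, 102: {"balance": "0.00"}}

def mashup():
    """Build a single aggregated view keyed by entity id."""
    view = {}
    for row in csv.DictReader(io.StringIO(FLAT_FILE), delimiter="|"):
        view[int(row["id"])] = {"name": row["name"]}
    for key, fields in DB_TABLE.items():
        view.setdefault(key, {}).update(fields)
    return view
```

The engine itself does this against live sources and exposes the result as a virtual database or web service; the sketch only shows the join-into-one-view step.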
Sun Data Quality and Load tools
By default, Sun Master Index uses the Master Index Match Engine and the Master Index Standardization Engine to standardize and match incoming data. Additional tools are generated directly from the master index application and use the object structure defined for the master index. These tools are the Data Profiler, the Data Cleanser, and the Initial Bulk Match and Load tool.
Master Index Standardization Engine
The standardization engine is built on a highly configurable and extensible framework, allowing it to standardize many different types of data originating from various languages and countries. It performs parsing, normalization, and phonetic encoding of the data being sent to the master index or loaded into the master index database. Parsing is the process of breaking a field down into its individual components, such as separating a street address into a house number, street name, and so on. Normalization changes a field value to its common form, such as converting a nickname to its standard version. Phonetic encoding allows queries to account for spelling and input errors. The standardization process cleanses the data before matching, presenting it to the match engine in a common form so that match weights can be computed accurately.
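The three steps can be illustrated with a small Python sketch. Soundex is used here as one well-known phonetic scheme, the nickname table is a tiny invented sample, and none of this reflects the Master Index Standardization Engine's actual algorithms or API:

```python
# Illustrative sketch of the three standardization steps:
# parsing, normalization, and phonetic encoding.

NICKNAMES = {"bill": "william", "bob": "robert"}  # tiny sample table

def parse_address(addr):
    """Parsing: split '123 Main St' into house number and street name."""
    number, _, street = addr.partition(" ")
    return {"house_number": number, "street_name": street}

def normalize(name):
    """Normalization: replace a nickname with its common form."""
    return NICKNAMES.get(name.lower(), name.lower())

def soundex(word):
    """Phonetic encoding: classic Soundex, first letter plus three digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, last = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            out += code
        if ch not in "hw":          # h and w do not reset the last code
            last = code
    return (out + "000")[:4]
```

Phonetic encoding is what lets a query for "Robert" also find a record misspelled "Rupert": both encode to R163, so the match engine sees them in a common form.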
Master Index Match Engine
The match engine provides the basis for deduplication with its record-matching capabilities. The match engine compares the match fields in two records and computes a match weight for each match field. It then totals the weights of all match fields to produce a composite match weight between the two records. This weight indicates how likely it is that the two records represent the same entity. The Master Index Match Engine is a high-performance engine that uses proven matching methods. It is built on a configurable framework, so you can customize the existing comparison functions as well as create and plug in custom functions.
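A toy sketch of the composite-weight computation follows. The agreement and disagreement weights and the prefix rule are invented for illustration; the real engine's comparators and weighting are far more sophisticated:

```python
# Illustrative sketch: per-field match weights summed into a composite
# match weight for a pair of records.

def field_weight(a, b, agree=4.0, disagree=-2.0):
    """Toy comparator: full agreement weight on an exact match,
    half credit when one value is a prefix of the other."""
    a, b = a.lower(), b.lower()
    if a == b:
        return agree
    if a.startswith(b) or b.startswith(a):
        return agree / 2
    return disagree

def composite_weight(rec1, rec2, match_fields):
    """Total the per-field weights into one composite score."""
    return sum(field_weight(rec1[f], rec2[f]) for f in match_fields)
```

For example, records for "Jon Smith" and "Jonathan Smith" with the same date of birth score half credit on the first name and full credit on the other two fields, yielding a high composite weight; an unrelated record would accumulate negative weights instead.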
Data Profiler
The Data Profiler performs a variety of frequency analyses. You can profile data before cleansing to help determine how to define cleansing rules, and you can profile data after cleansing to help fine-tune query blocking definitions or matching rules.
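As an illustration of frequency profiling (not the Data Profiler's actual interface), a few lines of Python can count value frequencies for a field to surface blanks and suspicious default values before cleansing rules are defined:

```python
# Illustrative frequency analysis: count distinct values of one field
# across a set of records, most common first.

from collections import Counter

def profile(records, field):
    """Return (value, count) pairs for `field`, most common first."""
    return Counter(r.get(field, "") for r in records).most_common()
```

A spike of empty strings or of a single placeholder value (such as "UNKNOWN") in the output is exactly the kind of finding that drives a cleansing rule or a change to the query blocking definitions.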