
Primary key method that will revolutionize Enterprise Systems (ES)

A globally unique primary key is one of the five principles that will allow us to create open-source ESs with an inherent capability to exchange data and aggregate it for reporting and AI.

The goal is to establish a global network of open-source ESs, each customized to the unique needs of a business unit or organization, that can exchange data so that data is entered only once and securely shared with every other system that needs it.

Today's thinking suggests using local primary keys and assigning globally unique primary keys only when transferring data to other systems or data warehouses. This approach hinders the global ES network goal because it makes the architecture for transferring and aggregating data unmanageably complex. Instead, we need to assign a primary key to data once and never have it change, no matter how many different ESs or data warehouses the data is transferred to.

While GUIDs may seem like a solution, they have several drawbacks: they are bulky (16 bytes), their random values cause index bloat and poor retrieval performance, and they are difficult to reference (e.g., 2109338f-448b-411a-89e8-c6320d28b52a).

To address these shortcomings, a new primary key algorithm was developed and tested. It automatically assigns a globally unique primary key to all master and transaction data using a function with a nominal impact on performance.

The key is made up of two components: the id of the system that created the record and an incremented integer that uniquely identifies the record within that system. These components are combined into a single eight-byte key capable of storing a quadrillion different record IDs and 1.5 million system IDs.
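To make the idea concrete, here is a minimal sketch of packing the two components into one 64-bit integer. The 20-bit/44-bit split below is an illustrative assumption; the post does not specify the exact bit allocation, and the names `make_key`, `SYSTEM_BITS`, and `RECORD_BITS` are hypothetical.

```python
# Illustrative composite primary key: system id in the high bits,
# per-system incrementing counter in the low bits.
# NOTE: the 20/44 split is an assumption for illustration only.
SYSTEM_BITS = 20   # ~1 million distinct system ids
RECORD_BITS = 44   # ~17.6 trillion records per system

def make_key(system_id: int, record_id: int) -> int:
    """Pack a system id and a record counter into one 64-bit key."""
    if not 0 <= system_id < (1 << SYSTEM_BITS):
        raise ValueError("system_id out of range")
    if not 0 <= record_id < (1 << RECORD_BITS):
        raise ValueError("record_id out of range")
    return (system_id << RECORD_BITS) | record_id

key = make_key(42, 1001)
```

Because the counter occupies the low bits, keys created by the same system sort in insertion order, which is what keeps the index compact.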

The incremented integer component is essential because it causes records to be stored sequentially, which reduces index bloat and improves retrieval performance.

This primary key design aligns with the record governance principle, which requires knowing which ES created the record. It is embedded in core data models and is pivotal for streamlining the data transfer principle, which facilitates automatic data exchange and aggregation for reporting and AI.
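Since the originating system's id is embedded in the key itself, record governance can recover it with a bit shift. The sketch below assumes the same illustrative 20-bit/44-bit split as above (not specified in the post), and the helper names are hypothetical.

```python
# Recovering the two components from a composite key.
# NOTE: the 20/44 bit split is an illustrative assumption.
SYSTEM_BITS = 20
RECORD_BITS = 44

def system_of(key: int) -> int:
    """Return the id of the ES that created the record."""
    return key >> RECORD_BITS

def record_of(key: int) -> int:
    """Return the per-system record counter."""
    return key & ((1 << RECORD_BITS) - 1)

key = (7 << RECORD_BITS) | 12345
origin = system_of(key)   # which ES created this record
```

This is what lets any downstream system or data warehouse identify a record's source without a lookup table.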

For more information on primary keys, visit

I appreciate your comments, questions, and likes.

Thank you for your attention.



