Data Principles


Imagine how different the following tasks are:

  1. Generate a monthly report on sales data
  2. Test a theory against a body of astrophysics data
  3. Provide an analyst a UI to navigate related data and quickly build ad hoc reports
  4. Perform analysis on web traffic to modify an advertising campaign on the fly
  5. Send alerts to members of a social network about their associates’ activity
  6. Analyze social networks to ascertain relationships and propagation paths
  7. Indicate in a web shopping cart when an item is likely to ship
  8. Reserve a seat on a flight
  9. Reserve a seat at a concert
  10. Transfer money or property between two or more accounts
  11. Update a ride sharer on the status of their vehicle
  12. Manage real-time military command-and-control operations

Figuring out how to wrangle the wildly disparate requirements of such systems is just another day in the life of a Data Engineer.  Such an engineer needs principles.

While the specific technologies of the trade come and go, certain approaches transcend time, and many of the biggest challenges are non-technical.  This guide attempts to be a compendium of the timeless that you can still read in a single sitting, serving as a jumping-off point for deeper learning, exploration, and regular revisiting.

I hope that this will be a living document.  I encourage your feedback.


I. Stay Grounded

Solve tractable and painful problems iteratively to build momentum.

  1. Tackle challenges with measurable outcomes.
  2. Dumpster dive to discover exploitable data.
  3. Identify the gaps between the data you have and the information you need.
  4. Focus on data quality before system performance.
  5. Lean heavily on existing capabilities while bootstrapping.
  6. Mold an infrastructure iteratively as you learn the requirements.

II. Preserve Knowledge

Ensure durability, traceability, and immutability to maintain integrity and confidence.

  1. Start by durably persisting data unmodified with associated metadata.
  2. Do not accept responsibility for data until it has been durably persisted.
  3. Attach a UUID to every object and use that as the sole way to reference it.
  4. Attach metadata that documents everything pertinent about an object’s origins.
  5. Treat all data as immutable, supporting its modification through versioning.
  6. Retain data as long as is legal, safe, practical, and ethical.
  7. Establish a clear Source System of Record for every kind of data in your enterprise.
  8. Maintain an immutable record of your system’s dispositions and actions over time.
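As a toy sketch of principles 3, 4, and 5, assuming in-memory dictionaries stand in for a durable store (the record shapes and field names here are illustrative, not prescriptive):

```python
import time
import uuid

def ingest(payload, source):
    """Persist an object unmodified, attaching a UUID and origin metadata."""
    return {
        "id": str(uuid.uuid4()),   # the sole way to reference this object
        "version": 1,
        "payload": payload,        # stored exactly as received
        "metadata": {"source": source, "ingested_at": time.time()},
    }

def revise(obj, new_payload):
    """Model modification as a new version; the original is never mutated."""
    return {
        "id": obj["id"],           # same identity, new version
        "version": obj["version"] + 1,
        "payload": new_payload,
        "metadata": dict(obj["metadata"], revised_at=time.time()),
    }
```

Because `revise` returns a fresh record rather than overwriting, every prior version remains available for audit and replay.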

III. Limit Complexity

Maintain loose coupling between systems to sustain delivery velocity and reliability.

  1. Maintain a central registry for data schemas, event streams, and API endpoints.
  2. Normalize data to explicit schemas before publishing for general consumption.
  3. Evolve the schemas of interfaces with backward and/or forward compatibility.
  4. Separate logical phases of processing with a message bus that supports queueing.
  5. Publish events to a message bus that facilitates reliable dissemination and replay.
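Principles 2 and 3 can be sketched together: normalize raw records to an explicit schema before publishing, and make new fields optional with defaults so older consumers keep working.  The schema and field names below are invented for illustration:

```python
# Schema v2 adds an optional "region" field.  Old producers omit it, so a
# default keeps v2 consumers working (backward compatibility); v1 consumers
# simply ignore the unknown field (forward compatibility).
SCHEMA_V2_DEFAULTS = {"region": "unknown"}

def normalize(raw):
    """Normalize a raw record to the explicit v2 schema before publishing."""
    record = {
        "order_id": str(raw["order_id"]),    # enforce types explicitly
        "amount_cents": int(raw["amount"]),
    }
    for field, default in SCHEMA_V2_DEFAULTS.items():
        record[field] = raw.get(field, default)
    return record
```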

IV. Innovate Sustainably

Make data accessible but track its usage to encourage convergence.

  1. Accumulate data in warehouses that facilitate efficient interactive exploration.
  2. Encourage exploratory activities but continually systematize them.
  3. Provide data to exploratory activities via mechanisms that track their existence.
  4. Run exploratory initiatives in sandboxes to constrain retention and redistribution.
  5. Encourage exploratory activities to publish their code for requirements analysis.

V. Expect Chaos

Design resiliency into systems at every level to preserve integrity and availability.

  1. Process events idempotently and independently of arrival ordering and delays.
  2. Process events in a transactional or eventually consistent fashion as appropriate.
  3. Process events such that a re-processing corrects an earlier mis-processing.
  4. Choose wisely between consistency and availability when network partitions occur.
  5. Process events in a fashion that anticipates the loss of a worker mid-job.
  6. Establish trigger- and delay-based patterns to reattempt event processing.
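A minimal illustration of principles 1 and 3, using an in-memory ledger keyed by event id; a real system would persist the ledger alongside the state it guards.  Deposits commute, so arrival order does not matter, and duplicates or retries simply return the earlier result:

```python
processed = {}   # event_id -> prior result: the idempotency ledger
balances = {}

def apply_event(event):
    """Apply a deposit idempotently; replays and retries are harmless."""
    eid = event["id"]
    if eid in processed:          # duplicate delivery: return prior result
        return processed[eid]
    acct = event["account"]
    balances[acct] = balances.get(acct, 0) + event["amount"]
    processed[eid] = balances[acct]
    return processed[eid]
```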

VI. Succeed Gracefully

Separate discrete workloads and establish appropriate patterns to scale effectively.

  1. Avoid collocating data that changes quickly with data that changes slowly.
  2. Leverage immutable self-contained documents when practical to stay simple.
  3. Favor stateless processing where possible to facilitate elastic scalability.
  4. Micro-batch timely stateful processing where eventual consistency is acceptable.
  5. Use sticky routing or optimistic locking for real-time, consistent, stateful processing.
  6. Leverage ACID-compliant technologies where transient inconsistency is intolerable.
  7. Isolate different workloads to prevent cache bulldozing and storage fragmentation.
  8. Isolate workloads of different SLAs/SLOs to prevent load-shock spill-over.
  9. Leverage evented query patterns for high-volume/high-latency external API calls.
  10. Provide queue-based APIs to maintain control of parallelism, time-outs, and QoS.
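Principle 5's optimistic locking can be sketched as a compare-and-set on a version counter; the store and record shapes here are illustrative:

```python
class Conflict(Exception):
    """Raised when the row changed since the caller read it."""

store = {"seat-12A": {"version": 1, "holder": None}}

def reserve(key, expected_version, holder):
    """Write succeeds only if nothing changed since our read; on conflict
    the caller re-reads the current row and retries."""
    row = store[key]
    if row["version"] != expected_version:
        raise Conflict(f"{key} changed; re-read and retry")
    store[key] = {"version": expected_version + 1, "holder": holder}
```

No lock is held while the user deliberates, which is why this pattern scales where pessimistic locking would not.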

VII. Stay Frosty

Weave telemetry generation into all sub-systems and monitor behavior proactively.

  1. Model tasks granularly, emit standardized telemetry, and centralize its storage.
  2. Record current time, task state, triggering conditions, and queue and service time.
  3. Contextualize with code version, container lifetime, and container placement.
  4. Index stored telemetry to support breadcrumb trail exploration for forensics work.
  5. Leverage standardized telemetry to auto-generate metrics and alerts.
  6. Baseline behavior and track trends to control costs and manage capacity.
  7. Maintain awareness of outlier behavior to stay within your SLAs/SLOs.
  8. Pump synthetic data through all of your flows to prevent a false sense of security.
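One possible shape for the standardized record of principles 1 through 3; the field names and the `GIT_SHA` environment variable are assumptions for illustration:

```python
import json
import os
import socket
import time

def emit_telemetry(task, state, trigger, queue_ms, service_ms):
    """Build one standardized telemetry record; in practice it would be
    shipped to central storage and indexed for forensic exploration."""
    record = {
        "ts": time.time(),
        "task": task,
        "state": state,            # e.g. "started", "succeeded", "failed"
        "trigger": trigger,        # what caused this task to run
        "queue_ms": queue_ms,      # time spent waiting
        "service_ms": service_ms,  # time spent working
        "code_version": os.environ.get("GIT_SHA", "dev"),
        "host": socket.gethostname(),
    }
    return json.dumps(record)
```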

VIII. Assuage Regulators

Integrate and index metadata early to serve diverse compliance requirements.

  1. Attach standard security and compliance metadata to every object.
  2. Authenticate recipients of data and ensure they possess adequate credentials.
  3. Include and index age-off metadata to comply with data retention requirements.
  4. Establish protocols to recall inappropriately collected or incorrectly processed data.
  5. Maintain logs of all queries/actions and the ability to reconstruct answers and state.
  6. Ponder whether your software may eventually run in a different legal jurisdiction.
  7. Know that following the law is not enough to keep customers and regulators happy.
  8. Disentangle certification and accreditation to foster rigor _and_ efficiency.

IX. Defend Proactively

Bake security into your system early on to avoid embarrassment and expensive refactors.

  1. Keep components tightly focused to reduce attack surface.
  2. Keep components minimally permissioned to limit their blast radius.
  3. Separate ingestion, normalization, analysis, and action into discrete components.
  4. Grant discrete permissions for configuration control, app deployment, and app operation.
  5. Build data stores that support fine-grained access based on security labeling.
  6. Today’s LAN is tomorrow’s SDN.  Encrypt all traffic to avoid getting burned.
  7. Combine physical security, encryption, and limited admin access for data at rest.
  8. Establish an authentication framework that will prove both secure and scalable.
  9. Avoid static credentials wherever possible and be mindful of where tokens end up.
  10. Favor short-lived, minimally equipped, dynamically credentialed processes.
  11. Enforce authorization at every point in the chain of custody.
  12. Catalog and control 3rd party software in your stack.  Monitor it for vulnerabilities.
  13. You will get compromised.  Aim to impede surveillance, spreading, and infil/exfil.
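The label-based access of principle 5 reduces, at its core, to a subset check: a reader must hold a clearance for every label on the object.  A minimal sketch, with invented labels:

```python
def may_read(user_clearances, object_labels):
    """Fine-grained access: the reader's clearances must cover every
    security label attached to the object."""
    return set(object_labels) <= set(user_clearances)
```

Enforcing this check at every point in the chain of custody, rather than only at the edge, is what makes the labeling meaningful.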

X. Nurture Trust

Set clear expectations and monitor associated metrics closely to maintain goodwill.

  1. Empathize with your users on what it would mean for your system to fail.
  2. Clearly communicate SLAs to consumers to manage expectations.
  3. Maintain SLOs with enough headroom to make failing to meet SLAs unlikely.
  4. Keep your eye on outlier performance and understand its impact.
  5. Support QoS faculties to protect and prioritize critical traffic.
  6. Proactively communicate integrity and performance issues to engender confidence.
  7. Maintain the tooling to quickly and precisely tell customers what went wrong.
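Principle 3's headroom can be made concrete by alerting on an internal SLO set tighter than the external SLA; the nearest-rank percentile and the 80% headroom factor below are illustrative choices:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list of samples."""
    ordered = sorted(samples)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

def within_headroom(latencies_ms, sla_ms, headroom=0.8):
    """Alert when p99 latency breaches the internal SLO (a fraction of
    the external SLA), so the SLA itself is rarely threatened."""
    return percentile(latencies_ms, 99) <= sla_ms * headroom
```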

XI. Foster Repeatability

Move fast _safely_ with systematized, phased, modular deployment mechanisms.

  1. Ensure that system clones can be constructed from scratch fully automatically.
  2. Strive never to touch production systems directly.
  3. Treat configurations and migrations with the same rigor as application code.
  4. Avoid massive, single-release, manually-intensive migrations like the plague.
  5. Pin the versions of your dependencies and test them like any other code.
  6. Pre-stage new storage engines and run comparisons in production before going hot.
  7. Make rollback a practical exercise for every release you perform.
  8. Use feature flags to prevent release pipeline traffic jams and enhance safety.
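A minimal sketch of principle 8's feature flags with percentage rollout; hashing the user id keeps each user's cohort stable across requests (the flag names are invented):

```python
import hashlib

FLAGS = {"new_checkout": {"enabled": True, "percent": 10}}

def flag_on(name, user_id):
    """Hash the user into a stable 0-99 bucket and compare against the
    rollout percentage; a kill switch disables the flag entirely."""
    flag = FLAGS.get(name)
    if not flag or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["percent"]
```

Because the code path ships dark behind the flag, deployment and release are decoupled, and rollback becomes a configuration change rather than a redeploy.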

XII. Be Unkillable

Eschew single points of failure and test recovery procedures to prevent catastrophe.

  1. Deploy your system in a way that leverages multiple availability zones.
  2. Ensure that your source data is stored in a highly durable and available fashion.
  3. Make it easy to reconstruct a state store by re-processing source data.
  4. Regularly test the loss of system elements and observe system behavior.
  5. Regularly rebuild infra and data stores from scratch to validate the process.
  6. Employ multi-party control for the most sensitive operations.
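Principle 3 amounts to keeping derived state a pure, deterministic fold over durable source data, as in this last-write-wins sketch; losing the store is then an inconvenience, not a catastrophe:

```python
def rebuild_state(source_events):
    """Reconstruct a derived state store purely by re-processing source
    events in their recorded order (last write wins per key)."""
    state = {}
    for event in source_events:
        state[event["key"]] = event["value"]
    return state
```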

Author’s Note

Data Principles represents an attempt by Andrew W. Gibbs to codify the lessons gleaned through years of wrangling messy data-intensive problems.

This compendium of knowledge would likely not have emerged, and certainly not have been as approachable, without the encouragement and counsel of his perennial colleague Aaron Zollman.