
The DQ playbook: How ‘data quality’ fuels business’ pursuit of precision

Read time: 2 mins
Posted on: August 14, 2025
The practice of data quality (DQ) is emerging as a key discipline businesses can use to understand and improve the reliability of the content they collect.
By Theresia Tanzil

When web data powers your business, every fragment can affect your future.


Often, when companies collect public web data – whether for ecommerce monitoring, real estate analytics or another use case – they measure the success of that task with one simple question: did the process result in any data, or not?


But that question overlooks one key truth – while web data extraction may succeed in yielding output, that output may nevertheless be incomplete, inaccurate or otherwise substandard.


To those who depend on it, good web data is never binary. Imagine a price monitoring project in which a silent glitch turns $109.99 into $10.99. One missed character can generate faulty insights that undermine corporate trust, customer experience or revenue forecasts.

Data quality to the rescue


Fortunately, the practice of data quality (DQ) is emerging as a key discipline businesses can use to understand and improve the reliability of the content they collect.


Data quality isn’t a wholly new concept. Frameworks such as MIT’s Total Data Quality Management (TDQM) program have been formalizing the discipline since the 1990s, giving organizations a repeatable cycle for defining, measuring, analyzing and improving their data.



Today, these frameworks guide everything from traditional data work to modern web data collection.


You can think of data quality for web data collection as an expression of quality assurance (QA) more broadly.


  • In manufacturing, QA has helped factory managers move from “We built all the cars” to “We built cars that pass a statistical inspection.”

  • In computing, software teams evolved from “It compiles” to “Every build passes thorough unit, integration, and user tests.”


Web data teams are now on the same journey – away from simply the task of scraping, toward repeatable, audited data pipelines that achieve desired results with minimal error.


A data quality mindset prioritizes not only data collection itself but also ensuring that the obtained data is fit for use – the core tenet of the first phase of the TDQM Cycle.

Quality is in the eye of the beholder


Key to achieving data quality is routinely inspecting your data. “You don’t know your car is broken until it breaks down,” says Zyte QA data scientist Artur Sadurski. “The further you go without inspecting the data, the harder it bites.”


So, give your data engine an inspection every once in a while to make sure it’s running effectively.


But what exactly are you inspecting for? In a case study paper, MIT professor Richard Y. Wang and colleagues argued that each company must choose a definition of quality appropriate to its goals, its industry, and its internal culture.


It is important to declare what is, and is not, essential to your use case.


Embarking on a data quality odyssey, then, depends on deciding what is important to you and the data you are gathering.

Planning for data quality


How exactly can you decide what constitutes “quality” for your business?


Zyte’s Sadurski recommends expressing your quality threshold using five data dimensions:


  1. Accuracy: does the captured value match the value on the page?

  2. Completeness: do we have all the records and fields we can get?

  3. Validity: do values conform to agreed‑upon formats?

  4. Consistency: do values stay consistent across records or between extracts?

  5. Timeliness: is the data delivered soon enough to be valuable?


Because each of these dimensions may be more or less critical to you, Sadurski says you can define your project’s “data quality” by placing the criticality of each dimension on a sliding scale.

“The sooner you think about something, the less surprised you will be by the outcome,” he says.


“Ask yourself: ‘What dimensions do we need to focus on? What do we care more about – the timeliness or the consistency? And what are we going to measure as the quality metric?’ All of that needs to be agreed upon while planning.”


What’s achievable in data quality is always a product of push and pull between these goals. For example:


  • A retail pricing team launching dynamic discounting needs razor-sharp accuracy on price fields every morning, but perfectly formatted product descriptions may be merely a nice-to-have.

  • Product managers studying competitor feature gaps care more about completeness of attribute lists than millisecond-fresh information.

  • Executives monitoring market share can tolerate a 24-hour delay, but not landscape gaps that misrepresent their company’s share.


The only “right” mix is the one that meets your tolerance for error and delay. So, decide with stakeholders which data fields and which dimensions are mission-critical, and which may be merely desirable.
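
To make that agreement concrete, some teams capture it in a small, machine-readable spec that lives alongside the project. Here is a minimal sketch in Python – every field name, weight and threshold below is a hypothetical example, not a prescribed value:

# dq_spec.py – a hypothetical record of the data quality agreement
# reached with stakeholders, versioned alongside the project.
DQ_SPEC = {
    "mission_critical_fields": ["price", "productName"],  # example fields
    "nice_to_have_fields": ["description"],
    "dimensions": {
        # weight: contribution to the overall score
        # threshold: minimum acceptable percentage before alerting
        "accuracy":     {"weight": 0.40, "threshold": 98},
        "validity":     {"weight": 0.10, "threshold": 95},
        "completeness": {"weight": 0.15, "threshold": 90},
        "consistency":  {"weight": 0.05, "threshold": 90},
        "timeliness":   {"weight": 0.30, "threshold": 95},
    },
}

# Sanity check: the weights should sum to 1.
assert abs(sum(d["weight"] for d in DQ_SPEC["dimensions"].values()) - 1.0) < 1e-9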

Activating data quality


Now that you know how to think about DQ, how can you actually implement it in your organization?


1. Show what ‘good’ looks like


Just as you must intentionally conceive of what “quality” means to you, it is crucial to communicate your target data needs to your data collection team, clearly and in detail.


“Show” can be more impactful than “tell”, so give your team an example of what good data looks like. For instance, provide a spreadsheet of mock records, annotated screenshots, or video walkthroughs of the target pages, to show exactly what data to extract from each layout, as well as how it should be parsed and formatted.


Clearly illustrating your needs in this way leaves your team with absolute clarity, helping avoid mid-project discoveries of substandard results.


2. Define your dream schema


Your requirements shouldn’t remain in the realm of illustration alone. Data quality is achieved when you actively specify the finer points of your requirements.


Define and document your rules of the road – for example, fields that are required, those which are optional, whether a missing field should be collected as an empty string or omitted entirely, whether currencies should be converted during scraping or later.


The definition of such rules is called a “schema” – documenting the structure, data types and value restrictions for your data – and is expressed in a standardized data specification format such as JSON Schema.

{
  "$id": "https://example.com/inventory-item.schema.json",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "InventoryItem",
  "type": "object",
  "properties": {
    "productName": {
      "type": "string",
      "description": "Name of the product.",
      "minLength": 2,
      "maxLength": 100
    },
    "inStock": {
      "type": "boolean",
      "description": "Whether the item is currently in stock."
    },
    "quantity": {
      "type": "integer",
      "description": "Number of items available in stock.",
      "minimum": 0,
      "maximum": 10000
    },
    "price": {
      "type": "number",
      "description": "Unit price of the product in USD.",
      "minimum": 0,
      "maximum": 10000
    }
  }
}

Such a schema articulates the requirements for your output, to data engineers and systems alike, serving as both the map for your data journey and the enforcer for your data project.
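
To see the schema acting as an enforcer, here is a minimal sketch that checks a single scraped record against the InventoryItem schema above, using the Python jsonschema package. The file path and record values are invented for illustration:

import json
from jsonschema import Draft202012Validator  # pip install jsonschema

# Load the InventoryItem schema shown above (path is a placeholder).
with open("inventory-item.schema.json") as f:
    schema = json.load(f)

record = {
    "productName": "Widget",
    "inStock": True,
    "quantity": 12,
    "price": -5.0,  # invalid on purpose: violates "minimum": 0
}

validator = Draft202012Validator(schema)
for error in validator.iter_errors(record):
    print(f"{'/'.join(map(str, error.path))}: {error.message}")
# Example output: price: -5.0 is less than the minimum of 0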


3. Monitor for data quality


With the specifications in place and web data flowing in, your team can configure different variables and checks to actually monitor for DQ.


In web data extraction, a “monitor” is like a “test case” in software development at large – a set of instructions for ensuring proper functioning.


Teams using Scrapy, one of the world’s main software frameworks for web data extraction, can also harness the Spidermon monitoring framework, a perfect complement for verifying extracted data against your mandated schema.


Spidermon supports data validation against JSON Schema rules and can trigger alerts when a crawl breaches those rules or fails outright.
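
As a rough illustration – not a substitute for the Spidermon documentation – a Scrapy project’s settings.py might wire this up along the following lines. The schema path and the item threshold are placeholders:

# settings.py (excerpt) – enabling Spidermon schema validation and monitors.
SPIDERMON_ENABLED = True

EXTENSIONS = {
    "spidermon.contrib.scrapy.extensions.Spidermon": 500,
}

# Validate every scraped item against the JSON Schema defined earlier.
ITEM_PIPELINES = {
    "spidermon.contrib.scrapy.pipelines.ItemValidationPipeline": 800,
}
SPIDERMON_VALIDATION_SCHEMAS = [
    "schemas/inventory-item.schema.json",  # placeholder path
]
SPIDERMON_VALIDATION_ADD_ERRORS_TO_ITEMS = True  # annotate failing items

# Run Spidermon's built-in close-spider monitor suite, e.g. a minimum item count.
SPIDERMON_SPIDER_CLOSE_MONITORS = (
    "spidermon.contrib.scrapy.monitors.SpiderCloseMonitorSuite",
)
SPIDERMON_MIN_ITEMS = 1000  # placeholder threshold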


4. Collect DQ indicators


With monitors in place, what signals actually add up to data quality? Your team can check for quality by testing extracted data against our five dimensions.


Accuracy: validating against the webpage
Does the output data match the actual content on the target page, or was it corrupted during extraction? Your team can calculate the match rate by running a side-by-side visual comparison of a representative sample of records against the source pages.
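
A minimal sketch of that calculation, assuming your team has paired a sample of extracted values with the values observed on the page during review (the sample data here is invented):

# Pairs of (extracted_value, value_seen_on_page) from a manual spot check.
sample = [
    ("109.99", "109.99"),
    ("10.99", "109.99"),   # the silent glitch from earlier
    ("89.50", "89.50"),
]

matches = sum(1 for extracted, on_page in sample if extracted == on_page)
accuracy = matches / len(sample) * 100
print(f"Accuracy: {accuracy:.1f}%")  # 66.7% for this toy sample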


Completeness: validating against the website
Did extraction get all the target records? Record completeness compares the latest collected record count with the expected number. Field completeness shows how often each field is populated in the collected data. Summary stats of key fields can be reported in the crawl job metadata to make issues easier to spot.
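
A sketch of both calculations over a couple of hypothetical records; the expected record count and field list are assumptions for illustration:

# Hypothetical crawl results: each dict is one collected record.
records = [
    {"productName": "Widget", "price": 109.99, "inStock": True},
    {"productName": "Gadget", "price": None,   "inStock": True},
]
expected_record_count = 3  # e.g. known catalogue size
expected_fields = ["productName", "price", "inStock"]

record_completeness = len(records) / expected_record_count * 100

populated = sum(
    1 for r in records for f in expected_fields if r.get(f) not in (None, "")
)
field_completeness = populated / (len(records) * len(expected_fields)) * 100

print(f"Record completeness: {record_completeness:.1f}%")  # 66.7%
print(f"Field completeness: {field_completeness:.1f}%")    # 83.3%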


Validity: validating against the format requirements
Do extracted data fields appear in the correct format? Each field is tested against the rules supplied in your schema to calculate the percentage of rows that conform to the agreed patterns.
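
Building on the jsonschema sketch above, validity can be reported as the share of records with zero schema violations. A hypothetical helper:

from jsonschema import Draft202012Validator

def validity_rate(records, schema):
    # Percentage of records that fully conform to the JSON Schema.
    validator = Draft202012Validator(schema)
    valid = sum(1 for record in records if validator.is_valid(record))
    return valid / len(records) * 100 if records else 0.0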


Consistency: validating across records
Do values stay consistent across records or between extracts? What if reviews you are extracting move from a five-star system to a 10-point scale? Calculate z-scores – a measure of how far a value sits from the mean, in standard deviations – for key numeric fields to spot outliers.
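
One simple way to sketch that outlier check in Python, using an invented sample of ratings and an illustrative threshold:

from statistics import mean, stdev

def zscore_outliers(values, threshold=3.0):
    # Flag values whose z-score exceeds the threshold
    # (e.g. a 10-point rating slipping into a five-star field).
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > threshold]

ratings = [4.5, 3.8, 4.9, 4.2, 9.5]  # hypothetical: one 10-point-scale value
print(zscore_outliers(ratings, threshold=1.5))  # [9.5]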


Timeliness: validating against the schedule
Is the data delivered soon enough to be valuable? The recorded delivery completion timestamp is compared to the target delivery time. Track how many deliveries to date arrived on time, as a share of total deliveries, to get a percentage success rate.
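
A minimal sketch with an invented delivery log:

from datetime import datetime

# Hypothetical delivery log: (completed_at, due_by) pairs.
deliveries = [
    (datetime(2025, 8, 14, 6, 55), datetime(2025, 8, 14, 7, 0)),
    (datetime(2025, 8, 15, 7, 20), datetime(2025, 8, 15, 7, 0)),  # late
]

on_time = sum(1 for done, due in deliveries if done <= due)
timeliness = on_time / len(deliveries) * 100
print(f"Timeliness: {timeliness:.0f}%")  # 50%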


By testing the indicators inherent in your data against these yardsticks, you can begin to get a real sense of how trustworthy your data is.


5. Score data quality


What separates a one-off quality inspection from a systemic commitment to improving data quality is ongoing measurement and the refinement it enables.


Many people have a gut sense that they know “quality” when they see it. But what gets measured gets managed.


Gartner estimates that poor-quality data costs organizations an average of $15 million a year, yet 59% of organizations do not measure data quality.


MIT’s TDQM Cycle recommends you “develop a set of metrics that measure the important dimensions of data quality for the organization and that can be linked to the organization’s general goals and objectives”.


There is no single, universally agreed-upon score for data quality across all industries. But, having monitored the key indicators above, you can combine them into a simple weighted average to arrive at your overall data quality score.

| Dimension | How to calculate | Weight assigned | DQ score reported |
| --- | --- | --- | --- |
| Accuracy | (correct values ÷ values checked) × 100% | 0.40 | 95% |
| Validity | (records passing rule set ÷ total records) × 100% | 0.10 | 92% |
| Completeness | Field: (populated fields ÷ expected fields) × 100%; Record: (records scraped ÷ records expected) × 100%; result is the average of both | 0.15 | 84% |
| Consistency | (consistent records ÷ total records) × 100% | 0.05 | 90% |
| Timeliness | (on-time deliveries ÷ total deliveries) × 100% | 0.30 | 75% |
| OVERALL DQ SCORE | Sum of each dimension’s weighted score: (0.40 × 95) + (0.10 × 92) + (0.15 × 84) + (0.05 × 90) + (0.30 × 75) | 1.00 | 86.8% |

Each dimension’s percentage is a useful DQ rating in its own right – a measure that can be recalculated for every extraction and graphed on an ongoing basis.


By adding together the weight-adjusted component scores, you can then produce a highly impactful top-line DQ score for your entire pipeline.


It is also easy to see how banding your main DQ score by numeric range can create a color-coded health indicator for your entire data pipeline (e.g. ≥ 90% green, 80–89% amber, below 80% red), displayable in a status dashboard to your entire team.
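
Pulling the table and the banding together, here is a short sketch of the calculation. The scores and weights are taken from the example table above, and the band boundaries are the illustrative ones just mentioned:

# Dimension scores (percentages) and agreed weights from the example table.
scores  = {"accuracy": 95, "validity": 92, "completeness": 84,
           "consistency": 90, "timeliness": 75}
weights = {"accuracy": 0.40, "validity": 0.10, "completeness": 0.15,
           "consistency": 0.05, "timeliness": 0.30}

overall = sum(scores[d] * weights[d] for d in scores)  # 86.8

def band(score):
    # Map a DQ score to a traffic-light health indicator.
    if score >= 90:
        return "green"
    if score >= 80:
        return "amber"
    return "red"

print(f"Overall DQ score: {overall:.1f}% ({band(overall)})")  # 86.8% (amber)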


By publishing both the aggregate and underlying metrics, it becomes easy to identify which dimension may be causing slippage that impacts overall data quality.

Let profit dictate the pipeline


Web data quality is never as simple as an on-off switch. It’s more like a slider. But you can control that slider.


When scraping operations are run through the lens of “data quality” practices – some old, some novel – the pay-off is immediate confidence.


Let knowledge and evidence of your data quality drive the next conversation with your stakeholders: “Here’s the quality level we’ve reached, the business decisions it now supports, and the competitive upside we’ve unlocked as a result.” Establish feedback loops to adjust weights, refine expected deliverables, or expand the schema as your market questions evolve.


The organizations that win the next pricing war or product race won’t be the ones who collected the most data; they’ll be the ones who proved their data was worth acting on.
