Data Feed (Power BI / Excel)
Scheduled folder drop of CSV + JSON files. Point Power BI, Excel Power Query, your ERP, or your accountant at the folder and your dashboards refresh on their own.
What the Data Feed does
The Data Feed turns GlyphFex into a source of truth that flows into other tools without anyone having to click Export. On a schedule you control (every 5 minutes, every 15 minutes, hourly, or daily), GlyphFex writes 5 files into a folder of your choosing:
- entries.csv — one row per job
- entries.json — the same data as a structured JSON array with a stable v1 schema (see below)
- quotes.csv — one row per quote-stage entry with quote line items
- customers.csv — one row per unique customer (rolled-up summary)
- time_entries.csv — one row per clock in/out record from the Workstation Terminal
Any tool that can read a folder can consume this. Power BI’s folder connector, Excel Power Query, accounting systems with directory-watcher imports, custom in-house ERP integrations — they all just point at the folder and read.
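For a custom in-house consumer, reading the snapshot takes only a few lines. A minimal Python sketch (the folder path is a placeholder; the file names are the five listed above):

```python
import csv
import json
from pathlib import Path

# Placeholder location -- substitute your own Data Feed folder.
FEED_DIR = Path(r"\\fileserver\shares\glyphfex-feed")

def load_entries(feed_dir: Path) -> list[dict]:
    """Read the structured JSON snapshot (the recommended source)."""
    with open(feed_dir / "entries.json", encoding="utf-8") as f:
        return json.load(f)

def load_time_entries(feed_dir: Path) -> list[dict]:
    """Read one of the CSV snapshots into a list of row dicts."""
    with open(feed_dir / "time_entries.csv", newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```

Because every refresh rewrites whole files in place, the consumer never needs incremental logic: re-read the folder and you have the current state.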
How to enable the Data Feed
- Open Settings Hub → Data Feed (Power BI / Excel).
- Pick a destination folder. Local disk works, but for team setups put it on a network share so everyone’s tools can read from one canonical location.
- Pick a refresh interval — Every 5 minutes, Every 15 minutes, Hourly, or Daily. Lower is fresher; higher is gentler on disk and CPU. Daily is the right default if your downstream consumer is an end-of-day reporting workflow.
- Click Enable Data Feed. GlyphFex writes the 5 files immediately, then continues writing on the chosen interval as long as the app is running.
That’s it. As soon as the folder has its first set of files, point Power BI / Excel / your ERP at the folder.
Atomic writes — readers never see half-written files
The Data Feed uses a three-step atomic write-and-swap on every refresh:
- Write all 5 files with a .tmp suffix. If any write fails, the temp files are deleted and the existing “good” files are left untouched. Power BI keeps reading the previous snapshot.
- Pre-flight check — before swapping in the new files, GlyphFex tries to open each target file with FileShare.None. If any file is locked (e.g., Excel has it open), the write is skipped and the next refresh tries again.
- Atomic rename. If the pre-flight passes, all 5 files are renamed from .tmp to their final names. Readers will never see a half-written CSV.
This means you can have Power BI refresh on a 15-minute schedule and GlyphFex refresh on a 5-minute schedule, and the two never collide.
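The write-then-swap pattern can be sketched in Python. This is an illustration only, not GlyphFex's implementation (which is .NET and uses FileShare.None; the pre-flight below is a loose approximation, since an exclusive-open check is platform-specific):

```python
import os
from pathlib import Path

def refresh_feed(feed_dir: Path, snapshots: dict[str, str]) -> bool:
    """Write-all-to-.tmp, pre-flight, then atomic rename.
    `snapshots` maps final filename -> file contents.
    Returns True if the swap happened, False if it was skipped."""
    tmp_paths: list[Path] = []
    try:
        # Step 1: write every file with a .tmp suffix.
        for name, content in snapshots.items():
            tmp = feed_dir / (name + ".tmp")
            tmp.write_text(content, encoding="utf-8")
            tmp_paths.append(tmp)
    except OSError:
        # Any failure: delete temps, leave the previous snapshot untouched.
        for tmp in tmp_paths:
            tmp.unlink(missing_ok=True)
        return False

    # Step 2 (pre-flight): check that no reader holds a target file open.
    # Rough stand-in for FileShare.None; only reliable on Windows.
    for name in snapshots:
        target = feed_dir / name
        if target.exists():
            try:
                with open(target, "r+"):
                    pass
            except OSError:
                for tmp in tmp_paths:
                    tmp.unlink(missing_ok=True)
                return False

    # Step 3: atomic rename over the old files.
    for name in snapshots:
        os.replace(feed_dir / (name + ".tmp"), feed_dir / name)
    return True
```

The key property is that readers only ever see a fully renamed file or the previous one, never a partial write.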
The v1 schema — stable forever
The entries.json file follows a v1 schema defined in the schema reference page. The contract is simple:
- Never remove a field. If customer_name exists in v1, it will exist in every future v1 release.
- Never change a type. If actual_hours is a number in v1, it stays a number.
- Never change a meaning. If quote_outcome means “the result of a Won / Lost decision” in v1, it will not silently start meaning something else.
- New fields are additive. When GlyphFex adds a new built-in field, it appears as a new key in v1; existing readers ignore unknown keys.
- Breaking changes ship as v2. If we ever need to redefine a field, that’s a new schema version — v1 keeps working alongside it for at least 12 months.
This is why the JSON file is the recommended source for any automation more durable than a one-off dashboard. CSVs are convenient for Excel and Power BI but lose nuance (e.g., tags become a comma-joined string instead of structured); JSON preserves the full structure.
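The lost nuance is easy to see side by side. A small sketch (the comma-joined CSV string is an assumed flattening; the JSON shape matches the tags key in the schema quick reference):

```python
import json

# The JSON feed keeps tags as a structured map of group -> values.
entry = json.loads("""
{
  "id": 42,
  "tags": {"Material": ["Stainless Steel"], "Process": ["TIG Welding"]}
}
""")
json_materials = entry["tags"]["Material"]  # still grouped, still a list

# The CSV collapses the same tags into one joined string, losing the
# grouping (and becoming ambiguous if a tag value contains a comma).
csv_tags = "Stainless Steel, TIG Welding"
recovered = [t.strip() for t in csv_tags.split(",")]
```

Any automation that cares about which group a tag belongs to should read entries.json, not the CSV.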
Schema quick reference
Every entries.json object includes these top-level keys:
| Key | Type | Notes |
|---|---|---|
| id | integer | Stable, project-scoped entry ID |
| ref_number | string | The Job Number you assigned |
| comments | string | Free-text description / notes |
| status | string | Current pipeline stage |
| pipeline_id | integer or null | Which pipeline this entry follows (null in single-pipeline projects) |
| tags | object | Structured map: { "Material": ["Stainless Steel"], "Process": ["TIG Welding"] } |
| key_fields | object | All 17 built-in fields always present, null for absent — stable column inference for Power BI |
| custom_fields | object | Per-project custom field values keyed by field name |
| attachments | array | Path-only metadata for files attached to this entry (paths to the original files on disk; the binary content is NOT included) |
| audit_summary | object | Summary of audit trail: created_at, last_modified_at, total_edits, last_edit_by |
| work_state | object | Roll-up of clock in/out: total_minutes, active_workers, in_progress, last_clock_event |
See the full v1 schema reference for every field name, type, and example payload.
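Given that shape, a downstream script can flatten each entry into a fixed set of columns, relying on key_fields always being fully populated. A sketch (the custom_ prefix is this example's choice, not part of the schema):

```python
def to_row(entry: dict) -> dict:
    """Flatten one entries.json object into a flat row.
    Because v1 always emits every key_fields key (null when absent),
    the resulting columns are identical for every entry."""
    row = {
        "id": entry["id"],
        "ref_number": entry["ref_number"],
        "status": entry["status"],
    }
    # key_fields: all 17 built-in fields, always present (null for absent).
    row.update(entry["key_fields"])
    # Prefix custom fields so they can't collide with built-in names.
    for name, value in entry.get("custom_fields", {}).items():
        row[f"custom_{name}"] = value
    return row
```

Mapping every entry through to_row yields a rectangular table ready for a DataFrame, a database insert, or a CSV writer.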
Multi-user safety
If you run GlyphFex in team mode with multiple machines, you don’t want both PCs writing into the same folder at the same time. The Data Feed handles this with a writer election:
- The first PC to enable the feed writes a hidden .feed.lock sentinel file in the destination folder, with its hostname and a heartbeat timestamp.
- Other PCs see the lock file and skip writing — they read the existing files like any other consumer would.
- The lock heartbeats every refresh interval. If the elected writer dies (PC powered off, app crash, etc.) and the heartbeat goes stale for 3× the refresh interval, the next PC to wake up takes over.
The lock file uses FileAttributes.Hidden so Power BI’s folder connector doesn’t try to ingest it as a data file.
Setting up Power BI
- In Power BI Desktop, Get Data → Folder.
- Browse to your Data Feed folder.
- Power BI shows the 5 files. Click Combine & Transform on entries.json for the richest dataset, or use the CSVs for quicker setup.
- Power BI infers the columns from the JSON. Because the v1 schema always emits all 17 key_fields (null for absent), the column inference is stable across refreshes — new entries with sparser data never cause column drift.
- Build your reports. Set scheduled refresh in Power BI Service to a cadence equal to or slower than the GlyphFex Data Feed cadence.
For Excel Power Query the same pattern applies: Data → Get Data → From File → From Folder.
When to use Data Feed vs. the alternatives
| Tool | Best for | Trade-off |
|---|---|---|
| Data Feed | Power BI, Power Query, ERP imports, accountant’s folder — anything unattended | Read-only export — not push notifications |
| Webhooks | Real-time event push (Zapier, n8n, in-house automation that should react to a save within seconds) | Event-by-event — not a full data snapshot |
| Live Excel Sync | You want Excel as the single source for office reporting and you’ll click Refresh manually | One workbook, one machine, manual refresh |
Common questions
Does the Data Feed include attachments?
It includes attachment metadata (filename, path on disk, MIME type, size) but not the binary content. The path lets a downstream tool open the original file if it has access to your file server.
How big can the entries.json file get?
For a 5,000-entry project the JSON is typically 5–15 MB. Power BI handles this in milliseconds; Excel Power Query handles it in 1–3 seconds on a typical office machine. For larger projects (50K+ entries) we recommend reading the CSVs directly or filtering on import.
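If you do read the CSVs for a large project, stream and filter rather than loading everything at once. A sketch using Python's standard library (the status column name follows the schema quick reference):

```python
import csv
from pathlib import Path

def filter_entries_csv(path: Path, status: str):
    """Stream entries.csv row by row, yielding only one pipeline stage,
    so a 50K-entry file never has to be held in memory wholesale."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["status"] == status:
                yield row
```

Power Query achieves the same effect by applying a row filter as an early step, before any expensive transformations.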
Can I customize which fields are exported?
Not in v1 — the schema is fixed by design so downstream readers don’t break. If you need a custom column, derive it in Power Query / Power BI / Excel as a calculated column based on the existing fields.
What happens to deleted entries?
Hard-deleted entries disappear from the next snapshot. Soft-archived entries appear with archived: true. If your downstream tool needs to keep a history of deleted entries, persist snapshots yourself (e.g., copy the folder daily into a dated archive).
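A minimal Python sketch of that dated-archive approach (folder names and the daily cadence are up to you; this version skips the .feed.lock sentinel and any in-flight .tmp files):

```python
import shutil
from datetime import date
from pathlib import Path

def archive_feed(feed_dir: Path, archive_root: Path) -> Path:
    """Copy today's feed snapshot into a dated folder, e.g. archive/2024-06-01.
    Run once per day (Task Scheduler / cron) to keep a history that
    survives hard-deleted entries."""
    dest = archive_root / date.today().isoformat()
    shutil.copytree(feed_dir, dest, dirs_exist_ok=True,
                    ignore=shutil.ignore_patterns(".feed.lock", "*.tmp"))
    return dest
```

Because each day lands in its own folder, a deleted entry can always be recovered by diffing yesterday's snapshot against today's.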
The feed isn’t writing — what now?
Check Settings Hub → Data Feed for the Last Run At timestamp and the most recent run status (Success / Skipped / Warning / Failed). The Settings Hub explains each result — Skipped usually means another PC holds the writer-election lock, Warning means one of the 5 files is locked by another reader, Failed includes the error message.