Data Feed (Power BI / Excel)

Scheduled folder drop of CSV + JSON files. Point Power BI, Excel Power Query, your ERP, or your accountant at the folder and your dashboards refresh on their own.

What the Data Feed does

The Data Feed turns GlyphFex into a source of truth that flows into other tools without anyone having to click Export. On a schedule you control (every 5 minutes, every 15 minutes, hourly, or daily), GlyphFex writes 5 files into a folder of your choosing: a structured JSON snapshot (entries.json) plus CSV files.

Any tool that can read a folder can consume this. Power BI’s folder connector, Excel Power Query, accounting systems with directory-watcher imports, custom in-house ERP integrations — they all just point at the folder and read.

How to enable the Data Feed

  1. Open Settings Hub → Data Feed (Power BI / Excel).
  2. Pick a destination folder. Local disk works, but for team setups put it on a network share so everyone’s tools can read from one canonical location.
  3. Pick a refresh interval: Every 5 minutes, Every 15 minutes, Hourly, or Daily. Lower is fresher; higher is gentler on disk and CPU. Daily is the right default if your downstream consumer is an end-of-day reporting workflow.
  4. Click Enable Data Feed. GlyphFex writes the 5 files immediately, then continues writing on the chosen interval as long as the app is running.

That’s it. As soon as the folder has its first set of files, point Power BI / Excel / your ERP at the folder.

Atomic writes — readers never see half-written files

The Data Feed uses a three-step atomic write on every refresh:

  1. Write all 5 files with a .tmp suffix. If any write fails, the temp files are deleted and the existing “good” files are left untouched. Power BI keeps reading the previous snapshot.
  2. Pre-flight check — before swapping in the new files, GlyphFex tries to open each target file with FileShare.None. If any file is locked (e.g., Excel has it open), the write is skipped and the next refresh tries again.
  3. Atomic rename. If the pre-flight passes, all 5 files are renamed from .tmp to their final names. Readers will never see a half-written CSV.

This means you can have Power BI refresh on a 15-minute schedule and GlyphFex refresh on a 5-minute schedule, and the two never collide.
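The write-then-rename pattern above can be sketched in a few lines. This is a minimal Python illustration, not GlyphFex's actual implementation; it covers a single file and stands in for the pre-flight lock check with a comment:

```python
import os
import tempfile

def atomic_write(path: str, data: str) -> None:
    """Write data so a reader never sees a partial file.

    Illustrative sketch of the temp-file-then-rename pattern;
    the real Data Feed does this for all 5 files per refresh.
    """
    directory = os.path.dirname(os.path.abspath(path))
    # Step 1: write the full payload to a .tmp file in the same directory
    # (same filesystem, so the final rename can be atomic).
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        # Step 2 (pre-flight) would try to open the target exclusively
        # here; the check is OS-specific and omitted from this sketch.
        # Step 3: os.replace is atomic on the same filesystem, so a
        # reader sees either the old snapshot or the new one, never a mix.
        os.replace(tmp_path, path)
    except BaseException:
        os.remove(tmp_path)  # leave the previous "good" file untouched
        raise
```

If any step fails, the existing file is never touched, which is exactly why a Power BI refresh can overlap a GlyphFex refresh safely.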

The v1 schema — stable forever

The entries.json file follows a v1 schema defined in the schema reference page. The contract is simple: the v1 field set is fixed, so the columns and types your reports depend on never change underneath you.

This is why the JSON file is the recommended source for any automation more durable than a one-off dashboard. CSVs are convenient for Excel and Power BI but lose nuance (e.g., tags become a comma-joined string instead of structured); JSON preserves the full structure.
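To see the difference, compare parsing the same tags from the JSON snapshot versus a flattened CSV cell (the entry fragment below is a hypothetical example matching the v1 tags shape):

```python
import json

# Hypothetical entry fragment in the v1 `tags` shape shown on this page.
entry_json = '{"tags": {"Material": ["Stainless Steel"], "Process": ["TIG Welding"]}}'

# JSON keeps the tag groups as a structured map, so you can still
# group or filter by category.
tags = json.loads(entry_json)["tags"]
materials = tags["Material"]

# A CSV cell flattens the same data to one comma-joined string; a reader
# can split it back into values but the category grouping is gone.
csv_cell = "Stainless Steel, TIG Welding"
flat = [t.strip() for t in csv_cell.split(",")]
```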

Schema quick reference

Every entries.json object includes these top-level keys:

| Key | Type | Notes |
| --- | --- | --- |
| id | integer | Stable, project-scoped entry ID |
| ref_number | string | The Job Number you assigned |
| comments | string | Free-text description / notes |
| status | string | Current pipeline stage |
| pipeline_id | integer or null | Which pipeline this entry follows (null in single-pipeline projects) |
| tags | object | Structured map: { "Material": ["Stainless Steel"], "Process": ["TIG Welding"] } |
| key_fields | object | All 17 built-in fields always present, null for absent — stable column inference for Power BI |
| custom_fields | object | Per-project custom field values keyed by field name |
| attachments | array | Path-only metadata for files attached to this entry (paths to the original files on disk; the binary content is NOT included) |
| audit_summary | object | Summary of audit trail: created_at, last_modified_at, total_edits, last_edit_by |
| work_state | object | Roll-up of clock in/out: total_minutes, active_workers, in_progress, last_clock_event |

See the full v1 schema reference for every field name, type, and example payload.
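The stable-columns guarantee is easy to picture: because every entry carries the same key_fields keys, a consumer can infer its columns from any single entry. A toy sketch with hypothetical field names (the real schema always emits all 17):

```python
import json

# Two hypothetical entries; real snapshots carry all 17 key_fields keys.
snapshot = json.loads("""
[
  {"id": 1, "key_fields": {"Customer": "Acme", "Due Date": null}},
  {"id": 2, "key_fields": {"Customer": null, "Due Date": null}}
]
""")

# Every entry emits the same key_fields keys (null when the value is
# absent), so stable columns can be derived from any one entry.
columns = list(snapshot[0]["key_fields"])
```

This is why sparse new entries never cause column drift in Power BI.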

Multi-user safety

If you run GlyphFex in team mode with multiple machines, you don’t want both PCs writing into the same folder at the same time. The Data Feed handles this with a writer election: one machine acquires a lock file in the destination folder and becomes the writer; any other machine that finds the lock skips its run (reported as Skipped) until the lock is released.

The lock file uses FileAttributes.Hidden so Power BI’s folder connector doesn’t try to ingest it as a data file.

Setting up Power BI

  1. In Power BI Desktop, Get Data → Folder.
  2. Browse to your Data Feed folder.
  3. Power BI shows the 5 files. Click Combine & Transform on entries.json for the richest dataset, or use the CSVs for quicker setup.
  4. Power BI infers the columns from the JSON. Because the v1 schema always emits all 17 key_fields (null for absent), the column inference is stable across refreshes — new entries with sparser data never cause column drift.
  5. Build your reports. Set scheduled refresh in Power BI Service to a cadence equal to or slower than the GlyphFex Data Feed cadence.

For Excel Power Query the same pattern applies: Data → Get Data → From File → From Folder.

When to use Data Feed vs. the alternatives

| Tool | Best for | Trade-off |
| --- | --- | --- |
| Data Feed | Power BI, Power Query, ERP imports, accountant’s folder — anything unattended | Read-only export — not push notifications |
| Webhooks | Real-time event push (Zapier, n8n, in-house automation that should react to a save within seconds) | Event-by-event — not a full data snapshot |
| Live Excel Sync | You want Excel as the single source for office reporting and you’ll click Refresh manually | One workbook, one machine, manual refresh |

Common questions

Does the Data Feed include attachments?

It includes attachment metadata (filename, path on disk, MIME type, size) but not the binary content. The path lets a downstream tool open the original file if it has access to your file server.

How big can the entries.json file get?

For a 5,000-entry project the JSON is typically 5–15 MB. Power BI handles this in milliseconds; Excel Power Query handles it in 1–3 seconds on a typical office machine. For larger projects (50K+ entries) we recommend reading the CSVs directly or filtering on import.

Can I customize which fields are exported?

Not in v1 — the schema is fixed by design so downstream readers don’t break. If you need a custom column, derive it in Power Query / Power BI / Excel as a calculated column based on the existing fields.

What happens to deleted entries?

Hard-deleted entries disappear from the next snapshot. Soft-archived entries appear with archived: true. If your downstream tool needs to keep a history of deleted entries, persist snapshots yourself (e.g., copy the folder daily into a dated archive).

The feed isn’t writing — what now?

Check Settings Hub → Data Feed for the Last Run At timestamp and the most recent run status (Success / Skipped / Warning / Failed). The Settings Hub explains each result — Skipped usually means another PC holds the writer-election lock, Warning means one of the 5 files is locked by another reader, Failed includes the error message.