SCADABLE
Platform feature

Run health checks on devices in the field, from your browser

Define tests in your GitHub repo. Trigger them from the console. Get results in seconds.


Your tests, your repo

Diagnostics on deployed hardware usually means a support engineer SSH'd into a gateway, running ad-hoc commands by hand. SCADABLE makes it a first-class workflow. You write the actual test in C using the SCADABLE_TEST macro (your firmware knows what "healthy" looks like, not the cloud), and a sidecar manifest in `.scadable/diagnostics/` tells the platform when to run it and what counts as failing.

Sensor ping, connectivity check, battery health, calibration verify. Anything your firmware can self-report becomes a button in the console.

A test, in your code

```c
#include "scadable.h"

// Customer-defined health check. Returns pass / pass_with_warn / fail
// plus an optional message. Library handles registration, triggering,
// timing, and result publish over MQTT.
SCADABLE_TEST(sensor_ping) {
    uint8_t found = 0;
    for (uint8_t addr = 0x48; addr <= 0x77; addr++) {
        if (i2c_probe(0, addr) == 0) found++;
    }
    if (found == 0) return SCADABLE_TEST_FAIL("no I2C sensors responding");
    if (found < 3) return SCADABLE_TEST_WARN("only %d of 3 sensors found", found);
    return SCADABLE_TEST_PASS();
}
```

A diagnostics manifest

```yaml
# .scadable/diagnostics/sensor-ping.yaml — when + how + what counts as fail.
name: Sensor ping
test: sensor_ping             # matches the SCADABLE_TEST(...) macro name
description: Verify all I2C sensors respond
timeout: 5s
run_on:
  - boot                      # every cold boot
  - schedule: "@hourly"       # cron-ish cadence
  - manual                    # console "Run" button
gate_ota: true                # any fail blocks OTA promote on this device
```
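The other checks listed earlier follow the same shape. A sketch for a battery check, assuming the same fields as above (the `battery_health` test name, cadence, and gating choice are illustrative):

```yaml
# .scadable/diagnostics/battery-health.yaml (hypothetical companion manifest)
name: Battery health
test: battery_health          # would match a SCADABLE_TEST(battery_health) in firmware
description: Check pack voltage is within operating range
timeout: 2s
run_on:
  - schedule: "@daily"        # slow-moving signal; no need to run on every boot
  - manual
gate_ota: false               # a weak battery warns but does not block OTA
```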

How a check runs

  • Engineer clicks "Run sensor ping" against device #1042 in the console
  • SCADABLE pushes the test request over MQTT to that device
  • Device firmware executes the check using libscadable helpers
  • Result returns in 1 to 3 seconds, attached to the device timeline
  • If results regress across the fleet, pass/fail thresholds can open a ticket automatically

Why we built this

Nobody else does this exactly. Existing platforms give you logs and metrics. Those are passive signals. Diagnostics are active. You ask the device a question and it answers. Combined with the GitHub-repo workflow, you get the same code-review and version-control story you already have for firmware, applied to your test suite.

Bring your fleet onto SCADABLE.

Connect a repo, leave with a fleet you can manage from your codebase.

Let’s talk