Roadmap
Here's what we're thinking about building next.
Target: March
Subscriptions
The only feature in our roadmap that absolutely no users asked for ;)
Why are we doing it anyway?
- Helps new users understand free plan limits and estimate potential future costs.
- Forces us to talk about pricing with our users openly and often (as we should).
- Lets us adjust pricing strategy based on real usage and feedback.
- Replaces our slow, manual, zero-value billing process that doesn’t scale.
Where we’re at:
- Initial release in pre-production: seat-based fees working via Paddle.
What’s missing?
- No metered usage yet (i.e., run and attachment packs).
- A clear UI that makes the billing structure easy to understand.
Target: 80% done
Onboarding
More and more teams are discovering TofuPilot through our docs and word of mouth, signing up for free accounts. We want to deliver a great onboarding experience, give them that "wow, it just works" moment, remove frustrations, and ensure they can explore the app for free and demo it internally.
What we’ve already done:
- New Welcome Aboard page.
- New Simulation Mode to run sample test scripts and explore the app without opening an IDE.
- Improved usage analytics to better track errors.
- Regrouped code snippets from docs into a GitHub examples repo, plus automated testing to keep them always working.
- Added a support contact form to our landing page.
What we still want to do:
- Add an in-app help UI to streamline issue/feature request reporting.
- Let users store serial regex formats in-app, so part numbers & revisions can be extracted without extra Python code.
- Improve the layout of our onboarding email.
- Create a welcome onboarding email sequence.
- Simplify the UI to make options clearer and more accessible.
- Transform our Python client into a real SDK (see below) to streamline integration.
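The in-app serial format idea could work with stored named-group regex patterns. Here is a minimal sketch; the serial layout, pattern, and helper name are illustrative assumptions, not a TofuPilot feature:

```python
import re

# Hypothetical serial format: part number, revision, then a unit index,
# e.g. "PCB01-A-00042". Each team would store its own pattern in-app.
SERIAL_PATTERN = re.compile(
    r"^(?P<part_number>[A-Z0-9]+)-(?P<revision>[A-Z])-(?P<unit>\d+)$"
)

def parse_serial(serial: str) -> dict:
    """Extract part number and revision without extra Python code in the test script."""
    match = SERIAL_PATTERN.match(serial)
    if match is None:
        raise ValueError(f"Serial {serial!r} does not match the expected format")
    return match.groupdict()
```

With this pattern, `parse_serial("PCB01-A-00042")` yields `part_number` "PCB01" and `revision` "A".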
What we’re still debating:
- Allowing two users on the free plan if the account creator isn’t the dev.
- Renaming components to parts to better match the `part_number` and `part_name` API params.
- Removing the `procedure_id` to avoid confusion with `procedure_name`.
Target: April
Stations
Test scripts are usually deployed on a remote PC, either on the OEM’s production line or at a supplier’s facility. There needs to be a secure way for these scripts to upload test data while maintaining strict permissions, such as allowing uploads only for selected procedures.
We aim to create a simple, efficient flow for test developers to create stations easily, authenticate them securely, and manage their lifecycle.
Since each run will be linked to the station that produced it, this will also enable filtering runs by station, comparing station performance, and, in the future, tracking efficiency and uptime statistics.
The setup flow we’re building:
- Create a station from the UI → opens its details page.
- Link it to one or more procedures.
- Copy the API key from the station page.
- Install it on the station computer.
- Upload runs for assigned procedures.
- See & filter runs by station in the UI.
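On the station computer, the "install it" step of this flow might look roughly like the sketch below; the environment variable and header names are assumptions, not the final TofuPilot API:

```python
import os

# Hypothetical setup: the station's API key (copied from its details page)
# is stored in an environment variable on the station computer.
def build_station_headers() -> dict:
    """Build auth headers for uploading runs from this station."""
    api_key = os.environ.get("TOFUPILOT_STATION_API_KEY")
    if not api_key:
        raise RuntimeError(
            "TOFUPILOT_STATION_API_KEY is not set on this station computer"
        )
    # Server-side, the key would only authorize uploads for the
    # procedures linked to this station.
    return {"Authorization": f"Bearer {api_key}"}
```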
Target: March
Python SDK
Our open-source Python client wraps our REST API, so you don’t have to. It includes direct storage uploads for better performance, OpenHTF one-line integration, and offline upload helpers. It works well, but we want to make it even simpler to integrate into any test script, evolving from a basic client to a full SDK.
What we want to do:
- Upload queue to handle cases where the test environment is offline
- Review API parameter consistency, simplifying where possible (e.g., using native Python types for durations)
- Improve console logging with clearer messages, links to the TofuPilot app, and direct error references in documentation
- Smoother onboarding with a CLI-like experience to prompt for missing API keys and proactively resolve issues
- Auto-generate the Python client structure from our OpenAPI spec, enabling future SDKs (C#, Rust, etc.)
- Local config file support to offload parameters such as procedure ID from test scripts
- Higher-level functions for structured logging of phases and measurements instead of requiring users to build a final upload structure manually
- Real-time streaming of test phases and measurements in non-OpenHTF Python scripts
- Review dependencies to keep the package as lightweight as possible
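Two of these ideas, local config files and native Python duration types, could be sketched as follows. The config file name, its keys, and the ISO-8601 duration encoding are assumptions for illustration:

```python
import json
from datetime import timedelta
from pathlib import Path

def load_local_config(path: str = ".tofupilot.json") -> dict:
    """Offload parameters such as the procedure ID into a local config file."""
    config_file = Path(path)
    if config_file.exists():
        return json.loads(config_file.read_text())
    return {}

def duration_to_iso(duration: timedelta) -> str:
    """Accept a native Python timedelta and encode it for the API
    (assuming an ISO-8601-style seconds duration on the wire)."""
    return f"PT{duration.total_seconds():.3f}S"
```

A test script could then call `run.create(duration=timedelta(seconds=12.5))` instead of hand-formatting a duration string.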
What we’re not sure about yet:
- Asynchronous test run uploads after execution
- Anonymous usage analytics collection in Python to better understand usage patterns, needs, and issues
Target: Early Q2
User Roles
Test developers aren’t the only ones who need access to test data and reports in TofuPilot. Operators, maintenance technicians, line managers, and design engineers might also need visibility. We want to ensure they can securely access the information they need without compromising data integrity.
What we want to do:
- Ability to assign user roles from a predefined set of options
- Link users to specific procedures based on their role, restricting read/write access as needed
- Adapt our pricing to support these account types
- Take the opportunity to improve the authentication flow, adding support for more OAuth providers like GitHub and GitLab
What we’re not sure of yet:
- The exact roles and permissions needed: predefined roles (Operator, Technician, Developer, Admin) or fully customizable permissions per user instead?
- Should we enable password login?
- How to make it easy for operators to log in if they use shared accounts on production computers?
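For the predefined-roles option, a minimal sketch of how roles could map to permissions; the role names come from above, but the permission strings are invented for illustration:

```python
# Illustrative only: permission sets per predefined role.
ROLE_PERMISSIONS = {
    "operator": {"run:read", "run:create"},
    "technician": {"run:read", "run:create", "procedure:read"},
    "developer": {"run:read", "run:create", "procedure:read", "procedure:write"},
    "admin": {"run:read", "run:create", "procedure:read", "procedure:write",
              "billing:manage"},
}

def can(role: str, permission: str) -> bool:
    """Check whether a role grants a given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Fully customizable permissions would instead store a set per user, at the cost of a more complex admin UI.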
Target: Q2
Operator UI
When deployed to production, test scripts often need an operator UI to let production operators start tests, scan serial numbers, and track results. Frameworks like OpenHTF include a basic operator UI, but many engineering teams still end up rebuilding their own version repeatedly with no real added value.
We want to provide a prebuilt Operator UI directly in our web application, with native support for OpenHTF and simple Python functions in our SDK to make it easy for teams using custom syntax.
What you’ll be able to do in our UI
- Scan serial numbers
- Track phases and measurements in real time
- Easily access test history
- Prompt operators for input when needed
Target: Q2
Multi-Dimensional Measurements
Many test setups require capturing structured data beyond simple scalar values. OpenHTF supports multi-dimensional measurements, allowing test developers to record values across multiple dimensions, such as time, voltage, and current, in a structured way. We aim to fully support OpenHTF’s multi-dimensional measurements and make them easy to store, query, and visualize in the app.
What we want to do
- Store and display OpenHTF multi-dimensional measurements without requiring flattening or manual restructuring
- Enable other test scripts to define measurements with multiple dimensions, such as time series or multi-unit parallel tests
- Support querying and filtering multi-dimensional data in the UI for better analysis
- Ensure compatibility with OpenHTF's `with_dimensions()` API to seamlessly integrate with existing test scripts
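Under the hood, a dimensioned measurement is essentially a table of dimension coordinates plus a measured value. A minimal stdlib sketch of that shape, assuming a single time dimension (the helper name is ours, not OpenHTF's):

```python
def to_dimensioned(series: dict) -> list:
    """Flatten a {time: value} series into OpenHTF-style
    (dimension_coordinate, measured_value) rows, sorted by coordinate.

    A measurement declared with with_dimensions(units.SECOND) is stored
    as one such row per recorded time point.
    """
    return [(t, v) for t, v in sorted(series.items())]
```

Storing this shape natively is what lets the app query and plot the series without requiring flattening or manual restructuring in the test script.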
What we’re not sure of yet
- The best way to visualize multi-dimensional data in the UI
- How to handle very large datasets efficiently
- Whether to introduce custom UI components for exploring multi-dimensional measurements dynamically
Target: Q2
Odoo Integration
Many hardware teams use Odoo to manage production, inventory, and quality control. Integrating TofuPilot with Odoo will help bridge the gap between test data and business processes, ensuring test results seamlessly feed into ERP workflows.
What we want to do
- Enable automatic test data syncing between TofuPilot and Odoo
- Support linking test runs to manufacturing orders, batches, or serial numbers in Odoo
- Push test results and pass/fail status to Odoo for better traceability
What we’re not sure of yet
- Allow Odoo to trigger test procedures in TofuPilot?
- How to handle cases where Odoo and TofuPilot store conflicting information
Target: Throughout 2025
Improved Analytics
TofuPilot already provides great test analytics, but we want to make them even better with continuous user feedback. The goal is to provide deeper insights, more flexibility, and a smoother experience for analyzing test results.
What we want to do
- Include throughput, second pass yield, and last pass yield for better production insights
- Improve the filtering system and UI to make it easier to drill down into test data
- Enhance data visualization to highlight key trends and anomalies faster
What we’re not sure of yet
- How to balance customizability vs. simplicity in analytics views
Target: Q2
Instruments
Test results depend not just on the test scripts but also on the instruments used to measure them. We want to add instrument tracking to ensure better traceability, calibration tracking, and metrology compliance directly from TofuPilot.
What we want to do
- Allow test scripts to log instruments used in each test run (e.g., serial number, calibration date)
- Enable instrument traceability by linking test results to specific hardware
- Provide warnings for out-of-calibration instruments to prevent bad data
- Support instrument tracking in analytics to compare results based on measurement sources
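A rough sketch of what per-run instrument logging and an out-of-calibration check could look like; the field names are assumptions, not a final TofuPilot schema:

```python
from datetime import date

def instrument_record(name: str, serial_number: str, calibration_due: date) -> dict:
    """Metadata a test script could log for each instrument used in a run."""
    return {
        "name": name,
        "serial_number": serial_number,
        "calibration_due": calibration_due.isoformat(),
    }

def is_calibration_valid(calibration_due: date, today: date) -> bool:
    """Flag an instrument before it measures past its calibration date."""
    return today <= calibration_due
```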
What we’re not sure of yet
- How detailed the instrument logging should be by default
Help shape TofuPilot.
Tell us what’s missing, what to improve, and what you need.