Every Convilyn workflow starts with a file and ends with a file. You upload a resume — you receive a tailored resume. You upload a job description — you receive an extracted requirements table. You upload a document — you receive a compliance report. Files in, files out.
This is not an arbitrary limitation. It's a design principle we chose deliberately, and it has shaped every architectural decision we've made since.
Files as the Universal Unit of Work
Files are the most universal format for representing meaningful work. Contracts are files. Resumes are files. Research papers are files. Marketing briefs are files. Financial reports are files. The vast majority of the high-value work that people do, and want help with, already exists as files or produces files as its output.
This matters because it means the boundary of each workflow is clear. You know what goes in and you know what comes out. There's no ambiguity about scope, no partial state that is only meaningful while the system stays connected to an external service, and no risk that the workflow produces something intangible and unmeasurable.
A file is concrete. You can inspect it, share it, version it, store it, and use it as input to the next workflow. Files compose naturally: the output of a resume-tailoring workflow can be the input to a cover-letter workflow. The output of a job description extraction workflow can be the input to a fit-scoring workflow. Each workflow is self-contained, and the outputs are fully portable.
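To make the composition point concrete, here is an illustrative sketch in Python. Each workflow is modeled as a function from an input file to an output file; the function names and placeholder transforms are assumptions for the example, not Convilyn's real APIs.

```python
import tempfile
from pathlib import Path

# Illustrative only: each workflow is a function from an input file to
# an output file, so workflows compose by simple chaining.

def tailor_resume(resume: Path) -> Path:
    out = resume.with_name("tailored_" + resume.name)
    out.write_text(resume.read_text() + "\n[tailored]")  # stand-in transform
    return out

def draft_cover_letter(tailored: Path) -> Path:
    out = tailored.with_name("cover_letter.txt")
    out.write_text("Cover letter based on: " + tailored.name)
    return out

# Because both sides of every workflow are files, composition is chaining:
workdir = Path(tempfile.mkdtemp())
resume = workdir / "resume.txt"
resume.write_text("Jane Doe, Engineer")
letter = draft_cover_letter(tailor_resume(resume))
```

The key property is that no workflow needs to know what produced its input or what will consume its output; the file is the whole contract.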
What the Constraint Prevents
Constraining every workflow to file inputs and file outputs prevents a category of design mistakes that would make the platform significantly harder to build, validate, and trust.
It prevents scope creep into integration complexity. Many AI workflow proposals naturally lead to: "and then it sends the result to your email," or "and then it posts to your social media account." These are legitimate downstream applications of workflow output — but they introduce authentication, rate limiting, third-party API dependencies, and failure modes that have nothing to do with the workflow's core intelligence. By scoping to files only, we keep each workflow focused on the thing it is uniquely good at: the intelligent transformation of content.
The upstream connections (where the file comes from) and downstream connections (where the output goes) are deliberately not our problem in this phase. Your email client, your social media scheduler, your document management system — these are integration layers that can attach to the file output at any point. The workflow doesn't need to know about them to do its job.
It makes quality measurable. When a workflow produces a file, you can read it. You can evaluate it. You can build automated checks that verify the output meets quality criteria — the right sections are present, the required format is correct, the prohibited patterns are absent. The 2-tier validation framework we described in the engineering series depends entirely on this property. If workflow output were an API call to an external system, it would be far harder to verify correctness.
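A minimal sketch of what such a check can look like, assuming illustrative section names and prohibited patterns; the real validation framework is richer, but it rests on the same property: the output is a readable file.

```python
import re
import tempfile
from pathlib import Path

# Illustrative sketch: because the workflow's result is a file, automated
# quality checks can read it directly. Section names and patterns below
# are assumptions for the example.

REQUIRED_SECTIONS = ["Summary", "Experience", "Skills"]
PROHIBITED_PATTERNS = [r"\[TODO\]", r"lorem ipsum"]

def validate_output(path: Path) -> list:
    """Return human-readable failures; an empty list means the file passed."""
    text = path.read_text()
    failures = []
    for section in REQUIRED_SECTIONS:
        if section not in text:
            failures.append("missing required section: " + section)
    for pattern in PROHIBITED_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            failures.append("prohibited pattern present: " + pattern)
    return failures

draft = Path(tempfile.mkdtemp()) / "resume.txt"
draft.write_text("Summary\nExperience\n[TODO] add skills")
failures = validate_output(draft)
```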
It enables reproducibility. When both the input and output are files, a workflow execution can be replayed exactly. The input file can be stored. The output file can be stored. The spec snapshot can be stored. If a user wants to understand why they received a particular output, or if we want to debug a quality regression, we have all the inputs needed to reproduce the execution.
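The reproducibility argument can be sketched as an execution record: store the input, the output, and the spec snapshot together, and the run can be replayed or audited. The record layout here is an assumption for illustration.

```python
import hashlib
import json
import tempfile
from pathlib import Path

# Illustrative sketch of an execution record: input file, output file,
# and spec snapshot stored together make a run replayable.

def record_execution(input_file: Path, output_file: Path,
                     spec: dict, store: Path) -> Path:
    def sha256(p: Path) -> str:
        return hashlib.sha256(p.read_bytes()).hexdigest()
    record = {
        "input": {"name": input_file.name, "sha256": sha256(input_file)},
        "output": {"name": output_file.name, "sha256": sha256(output_file)},
        "spec_snapshot": spec,  # the exact spec used for this run
    }
    path = store / ("run_" + sha256(input_file)[:12] + ".json")
    path.write_text(json.dumps(record, indent=2))
    return path

store = Path(tempfile.mkdtemp())
inp = store / "in.txt"
inp.write_text("input content")
out = store / "out.txt"
out.write_text("output content")
rec = record_execution(inp, out, {"version": "2024-01"}, store)
```

Hashing the files rather than merely naming them means a later replay can verify it is operating on byte-identical inputs.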
The Main Task, and Nothing Else
The file-in-file-out constraint is the enforcement mechanism for a deeper principle: every workflow should solve exactly one main task, and solve it completely.
This sounds obvious, but in practice it requires active discipline. The natural tendency in product development is to add features at the edges: "while we're processing the resume, we could also check the LinkedIn profile," or "while we're extracting the requirements, we could also score the candidate against them." These additions feel like they add value, but they also add complexity, increase failure surface, and make the workflow's quality harder to measure and maintain.
A workflow that processes a resume and produces a tailored resume is solving one problem. Its quality can be evaluated by reading the output. Its failure modes are bounded. Its scope is clear to the user before they start.
A workflow that processes a resume, checks a LinkedIn profile, searches for open positions, sends an email to the recruiter, and posts a status update is solving five problems with five different quality dimensions, five potential external failures, and a scope that's hard to communicate in a single sentence. Even if each individual step is well-designed, the compound system is more fragile than the sum of its parts.
The "one main task" discipline, enforced by the file boundary, is what allows us to build nearly 100 workflows without each one becoming a complex integration project. It also means that when something goes wrong, you know exactly where to look.
Upstream and Downstream: Not Now, Not Never
Choosing to exclude upstream sources (WhatsApp messages, email attachments, social media posts) and downstream destinations (automated sends, calendar integrations, CRM updates) from this phase is not a permanent architectural decision. It's a sequencing decision.
Workflows need to be valuable before they need to be integrated. A resume-tailoring workflow that produces a great resume is useful even if you have to manually email it afterward. A compliance-checking workflow that produces an accurate report is useful even if you have to upload it to your document management system manually. The core value — the intelligence — doesn't depend on the integrations. The integrations multiply the value of something that already works, but they don't create value that wasn't there to begin with.
When we add upstream and downstream integrations, we'll add them to workflows that are already proven. The file interface becomes an adapter point: the integration layer translates from "message in a chat thread" to "file" before the workflow starts, and from "file" to "sent email" or "posted content" after it ends. The workflow itself doesn't change. The intelligence is preserved; the connectivity is added at the edges.
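The adapter architecture described above can be sketched as follows. Every name and transform is a placeholder assumption; the point is the shape: the workflow in the middle only ever sees files, and integrations attach at the edges without touching it.

```python
import tempfile
from pathlib import Path

# Illustrative sketch of pluggable integration edges around a stable
# file-in-file-out core. All names here are assumptions.

def upstream_chat_adapter(message_text: str, workdir: Path) -> Path:
    """Translate 'message in a chat thread' into a file input."""
    f = workdir / "input.txt"
    f.write_text(message_text)
    return f

def workflow(input_file: Path) -> Path:
    """The proven core: file in, file out. Unchanged by any integration."""
    out = input_file.with_name("output.txt")
    out.write_text(input_file.read_text().upper())  # stand-in transform
    return out

def downstream_email_adapter(output_file: Path) -> dict:
    """Translate the file output into a 'sent email' (stubbed here)."""
    return {"to": "user@example.com", "body": output_file.read_text()}

workdir = Path(tempfile.mkdtemp())
email = downstream_email_adapter(
    workflow(upstream_chat_adapter("hello", workdir)))
```

Swapping the email adapter for a CRM adapter, or the chat adapter for an attachment adapter, leaves `workflow` untouched; that is what "files are the stable interface" means in practice.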
This architecture — proven core intelligence, pluggable integration edges — is significantly easier to build correctly than trying to design integrations into every workflow from the start. Files are the stable interface that keeps the two concerns separate.
Why Supporting 280+ Formats Matters
If the architecture is file-in-file-out, then the breadth of file formats the system supports determines the breadth of problems the system can address. A workflow platform that only handles PDFs can only solve PDF problems. One that handles 280+ formats — documents, images, spreadsheets, presentations, audio, video, data formats — can address the full range of file-based work.
The Turbo Lane conversion system exists precisely to make this possible. When a Goal Lane workflow needs to receive content in one format and deliver it in another — receive a DOCX, produce a PDF; receive an image, produce extracted text — the conversion capability is available as a reliable, low-cost operation. The intelligence layer doesn't need to handle format conversion. The infrastructure layer already does.
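One way to picture the separation is a conversion wrapper around the intelligence step. In this sketch, `convert` is a text-copying stand-in for the real conversion system, and `summarize` is a placeholder transform; both are assumptions for illustration.

```python
import tempfile
from pathlib import Path

# Illustrative sketch: format conversion is an infrastructure operation
# that wraps the intelligence step, so workflow logic never handles
# formats itself. 'convert' here is a stand-in, not the real system.

def convert(src: Path, target_suffix: str) -> Path:
    dst = src.with_suffix(target_suffix)  # a real converter would transcode
    dst.write_text(src.read_text())
    return dst

def run_with_conversion(input_file: Path, transform, output_suffix: str) -> Path:
    normalized = convert(input_file, ".txt")   # normalize the input format
    result = transform(normalized)             # intelligence sees text only
    return convert(result, output_suffix)      # deliver the requested format

def summarize(p: Path) -> Path:
    out = p.with_name("summary.txt")
    out.write_text(p.read_text()[:40])  # stand-in for the intelligence step
    return out

workdir = Path(tempfile.mkdtemp())
doc = workdir / "report.docx"
doc.write_text("quarterly results: stable")
final = run_with_conversion(doc, summarize, ".pdf")
```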
Format breadth is not a feature. It's a prerequisite for the workflow workshop to address the full diversity of what people actually work with. The person who works in spreadsheets lives in a different format than the person who works in PDFs, but both deserve AI workflows that speak their format fluently.
