Background
Alex had been running a crypto news and analysis site for two years. The fundamentals were solid — good domain authority, a loyal readership, and steady monetization through affiliate deals and display ads. What he didn’t have was the editorial firepower to keep pace with the market.
Crypto moves faster than any other beat in digital media. A protocol vulnerability drops at 3am. A central bank hints at a CBDC. A token does a 4x in 36 hours. Every one of those moments is a content opportunity — and a missed window if you’re not publishing within hours. Alex’s site needed fresh, SEO-optimized articles daily to stay visible in search and relevant to readers who had no shortage of other places to go.
He’d tried the obvious solutions. Freelance writers produced inconsistent work and needed constant briefing. Staff writers were expensive and still couldn’t scale to the volume he needed. Generic AI tools gave him articles that read like they’d been written by a robot that had never heard of DeFi — technically passable, practically useless for ranking. Nothing he tried gave him volume, quality, and speed at the same time.
That’s when he started looking for something purpose-built.
The Challenge
Crypto content has a specific set of demands that make it harder to automate than most niches. It needs to be accurate — people make real financial decisions based on what they read. It needs to be timely — a piece about a token launch published three days late is worth almost nothing. And it needs to be SEO-optimized enough to compete against CoinDesk, Decrypt, and Cointelegraph, which have full editorial teams and years of domain authority behind them.
Generic AI content tools weren’t built for this. Most large language models have a knowledge cutoff — they can’t tell you what’s happening to ETH gas fees today or what the market reaction to last week’s Fed statement was. The articles they produce are structurally fine but informationally stale. In a niche where stale is worthless, that’s not a solution.
Alex also needed control without complexity. He wanted to be able to specify tone, article length, language, and optional features — FAQs, key takeaways, embedded videos — without managing a team of writers or touching a piece of content himself. The system needed to understand his site, his audience, and his editorial standards, and execute against them automatically.
Solutions
Vume designed and built a fully automated content pipeline on Make.com, connected to Perplexity AI’s real-time search models and Alex’s WordPress installation. The system takes a topic and a set of parameters as inputs and returns a fully formatted, publish-ready article — no human in the loop required.
- Webhook trigger layer — A custom webhook accepts the article topic alongside a full parameter set: language, tone of voice, communication style, article size, target country, and optional content modules (key takeaways, conclusion, FAQs, in-article images, featured image, YouTube embeds). Alex’s team triggers new articles via a simple API call from their editorial calendar tool.
- SEO outline generation (Perplexity sonar-pro) — The first AI call generates a structured JSON outline with H2 headings and section descriptions. Perplexity’s sonar-pro model was chosen specifically because it searches the live web as part of every completion — meaning outlines are built around what’s actually ranking and trending right now, not a knowledge snapshot from six months ago.
- Section-by-section content writing (Perplexity sonar-pro) — The pipeline loops through each outline section independently, prompting sonar-pro to write focused, HTML-formatted content for each heading. Each call is context-aware — the model sees the full outline so sections don’t overlap or contradict each other. Live web search is active on every call, keeping all data points, prices, and references current.
- Optional content modules — Key takeaways, conclusions, and FAQ sections with JSON-LD schema markup are generated conditionally based on the input parameters, using Perplexity’s sonar model. Each is injected into the article at the appropriate position via comment placeholders replaced during assembly.
- Automated image generation — In-article images and featured images are generated via a custom image generation API, with prompts derived from the article’s SEO keyword, tone, and company context. Images are injected inline between sections.
- YouTube video embedding — When enabled, the pipeline queries the YouTube Data API v3 for the most relevant video to the article’s keyword and embeds it automatically above the conclusion.
- WordPress publishing — The assembled HTML is pushed directly to Alex’s WordPress site via the REST API, complete with category assignment, meta fields, and featured image — ready to review or auto-publish depending on his workflow.
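The outline and section-writing steps above can be sketched in Python against Perplexity's OpenAI-compatible chat completions endpoint. The endpoint URL and the sonar-pro model name are real; the prompt wording, the JSON outline shape, and the helper names are illustrative assumptions, not the pipeline's actual code.

```python
# Sketch of the outline call and the section-by-section writing loop.
# Assumed: the outline comes back as {"headings": [{"h2": ..., "brief": ...}]}.
import json
import urllib.request

PPLX_URL = "https://api.perplexity.ai/chat/completions"

def outline_prompt(topic: str, params: dict) -> list:
    """Build the messages for the outline call from the webhook inputs."""
    return [
        {"role": "system",
         "content": "Return a JSON object: {\"headings\": [{\"h2\": str, \"brief\": str}]}."},
        {"role": "user",
         "content": f"Create an SEO outline for '{topic}' "
                    f"(tone: {params['tone']}, length: {params['size']}, "
                    f"country: {params['country']})."},
    ]

def call_sonar(messages, api_key, model="sonar-pro"):
    """POST to Perplexity; sonar-pro searches the live web on every call."""
    req = urllib.request.Request(
        PPLX_URL,
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def write_sections(outline: dict, call=call_sonar, api_key=""):
    """Loop over headings; each call sees the full outline for context,
    so sections don't overlap or contradict each other."""
    html_parts = []
    for section in outline["headings"]:
        messages = [
            {"role": "system", "content": "Write HTML for one section only."},
            {"role": "user",
             "content": f"Full outline: {json.dumps(outline)}\n"
                        f"Write the section '{section['h2']}': {section['brief']}"},
        ]
        html_parts.append(f"<h2>{section['h2']}</h2>\n{call(messages, api_key)}")
    return "\n".join(html_parts)
```

The `call` parameter is injectable so the loop can be tested (or swapped to another model) without a live API key.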
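The FAQ module's JSON-LD output follows the standard schema.org FAQPage shape; the helper names and the comment-placeholder convention below are assumptions about the pipeline's internals, shown only to make the assembly step concrete.

```python
# Illustrative sketch: render FAQ pairs as JSON-LD and inject at a placeholder.
import json

def faq_jsonld(faqs: list) -> str:
    """Render (question, answer) pairs as an FAQPage <script> block."""
    schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question",
             "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in faqs
        ],
    }
    return f'<script type="application/ld+json">{json.dumps(schema)}</script>'

def inject(article_html: str, placeholder: str, block: str) -> str:
    """Replace a comment placeholder (e.g. <!--FAQ-->) during assembly."""
    return article_html.replace(placeholder, block)
```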
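The last two steps can be sketched the same way. The YouTube Data API v3 `search` endpoint and the WordPress REST `/wp/v2/posts` endpoint are the documented ones; the function names, field choices, and absent error handling are simplifications for illustration.

```python
# Hedged sketch: find the most relevant video, then publish to WordPress.
import base64
import json
import urllib.parse
import urllib.request

def youtube_embed_url(keyword: str, api_key: str, fetch=None) -> str:
    """Search YouTube for the keyword and return an embed URL."""
    query = urllib.parse.urlencode({
        "part": "snippet", "q": keyword, "type": "video",
        "maxResults": 1, "key": api_key,
    })
    url = f"https://www.googleapis.com/youtube/v3/search?{query}"
    data = fetch(url) if fetch else json.load(urllib.request.urlopen(url))
    video_id = data["items"][0]["id"]["videoId"]
    return f"https://www.youtube.com/embed/{video_id}"

def publish_post(site: str, user: str, app_password: str, post: dict, send=None):
    """POST the assembled article to WordPress via the REST API,
    authenticated with an application password (Basic auth)."""
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    req = urllib.request.Request(
        f"{site}/wp-json/wp/v2/posts",
        data=json.dumps(post).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
    )
    return send(req) if send else json.load(urllib.request.urlopen(req))
```

Passing `status: "draft"` in the post body gives the review-before-publish workflow; `"publish"` gives full auto-publishing.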
The architecture decision that mattered most was the choice of Perplexity sonar-pro over a standard LLM. In a niche like crypto, where data changes hourly, using a model without live web access produces articles that are structurally correct but informationally wrong. Sonar-pro searches the web on every generation call, which means articles reference real current prices, recent protocol updates, and live market sentiment — the kind of specificity that earns trust from readers and signals quality to search engines.
The parameterized input system also gave Alex something he hadn’t had before: consistent editorial control without manual involvement. He could specify that a long-form analysis piece needed a formal tone, FAQs with schema markup, and three in-article images — or that a quick news item needed a short format, no extras, and a punchy direct style. The system executed against those specs every time, without briefing a writer or reviewing a draft.
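Two contrasting parameter sets make this concrete — one for a long-form analysis piece, one for a quick news item. The field names here are assumptions about the webhook's schema, not its actual contract.

```python
# Illustrative webhook payloads for the two article types described above.
long_form = {
    "topic": "Layer-2 scaling in 2025",
    "language": "en", "country": "US",
    "tone": "formal", "size": "long",
    "modules": {"key_takeaways": True, "faq": True, "conclusion": True,
                "in_article_images": 3, "featured_image": True,
                "youtube_embed": True},
}

quick_news = {
    "topic": "SOL ecosystem token launch",
    "language": "en", "country": "US",
    "tone": "direct", "size": "short",
    "modules": {"key_takeaways": False, "faq": False, "conclusion": False,
                "in_article_images": 0, "featured_image": True,
                "youtube_embed": False},
}
```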
"Before this, we were maybe publishing three or four articles a week, and half of them needed editing before they went live. Now we're putting out eight to ten pieces a day, and they're better than what we were writing manually — because they're pulling real data. I had a piece go up about a Solana ecosystem token within two hours of a major announcement, and it ranked on the first page by the next morning. That just wasn't possible before. My team doesn't write anymore. They decide what to cover and the system does the rest."
Alex R., Founder, crypto news & analysis site
Key Outcomes
- 8–10 articles published per day, fully automated — up from 3–4 per week produced manually
- Organic search traffic increased by 340% within 90 days of launch, driven by higher content volume and real-time data accuracy that reduced bounce rates
- Average time to publish after a news event dropped from 24–48 hours to under 2 hours, capturing search intent at its peak
- Cost per article reduced by over 80% compared to freelance rates, while publishing velocity increased by more than 15x
- Zero editorial overhead — no briefing, no drafts to review, no revisions. The team went from managing writers to managing topics
- SEO-structured output by default — every article includes optimized H2 structure, optional FAQ schema markup, and keyword-aligned headings, without any manual SEO work
- Full editorial control retained — tone, article length, language, content modules, and publishing schedule all configurable per article via a single API call