Technology

From workshop to deployment: how to choose processes that AI will really improve

Agnieszka Ułaniak
Marketing Manager, Altimi
January 30, 2026
2 min read

AI can deliver spectacular results… but only when it is applied to the right process and has the conditions to operate (data, tools, an owner, KPIs, risk control). Otherwise it ends with a “nice demo” and disappointment, because we automate something that:

  • is rare and non‑standard,
  • has too high an error risk,
  • has no measurable impact,
  • requires data that the company does not have or cannot use.

This guide shows a practical path: workshop → selection → pilot → production deployment, so that AI really shortens time, reduces costs and improves quality instead of just “looking impressive”.

Why process selection is key (and where the biggest potential usually is)

McKinsey research indicates that the largest share of the value of generative AI is concentrated in areas such as customer operations, marketing & sales, software engineering and R&D.
This is a good hint about where to look for “AI‑friendly” processes, but you still need to go one level lower: from functions (e.g. “customer service”) to specific workflows and tasks (e.g. “ticket triage + suggesting responses + CRM update”).

Step 0: first decide what type of improvement you want to achieve

In practice, AI implementations fall into three categories (and each requires different processes):

  • Copilot / assistance – AI supports a human (creates drafts, summaries, suggested responses).
  • Automation (end‑to‑end or partial) – AI performs process steps on its own, often with human‑in‑the‑loop control.
  • Decisions / predictions – AI recommends a decision (e.g. priority, risk, routing), but usually does not “close” the process itself.

The safest start is usually assistance + semi‑automation, because they deliver value faster and are easier to manage.

Step 1: a process discovery workshop that does not end with a “wish list”

A good workshop ends with a map of candidates + data for evaluation, not just “brainstorming”.
Participants: the process owner, an operational person (who actually performs the tasks), an IT/DevOps representative (integrations), a security/GDPR specialist (data), and a person responsible for metrics (finance/analytics).

Workshop outputs:

  • 10–30 candidates (processes or subprocesses),
  • for each: goal, inputs/outputs, systems, volume, variants, errors, risks, “bottlenecks”.

How to quickly gather the truth about a process (instead of opinions)

If you can, base discovery on data:

  • Process mining (event logs from systems) and task mining (what people actually click on the desktop) show the actual course of the process and where there are errors, delays and automation potential. Microsoft describes this as a way to “X‑ray” the process and identify opportunities for automation.
  • Vendors of process‑mining tools (e.g. from the RPA ecosystem) position this as an alternative to “interviews and meetings”, which often give an incomplete picture.

Step 2: process selection – scoring that works in the real world

Instead of debating “which process is cool”, score each candidate from 1 to 5 on the key dimensions below. Then choose the 2–3 best ones for the pilot program.

A. Business value (Impact)

Assess:

  • how many hours per month the process consumes,
  • how much errors cost (rework, complaints, losses),
  • whether it affects revenue (conversion, churn),
  • whether it unblocks scaling (more customers without hiring).

Hint: McKinsey points out that a “use case” should have a measurable outcome (e.g. cost reduction, revenue increase).

B. Repeatability and standardization

AI best improves process fragments that:

  • occur frequently,
  • follow a similar pattern,
  • have clear boundary rules (when to escalate to a human).

C. Data and access (Data readiness)

Check:

  • whether data exists (ticketing/CRM/ERP/logs),
  • whether it is of sufficient quality (consistent fields, little unstructured “free text”),
  • whether it can be used legally and safely.

D. Integrations and feasibility (Feasibility)

Helper questions to assess “implementation difficulty”:

  • how many systems need to be integrated (and how interdependent they are),
  • whether APIs or other integration points exist, or whether everything will require workarounds,
  • whether it is possible to operate “on the side” – e.g. starting in assistant mode, shadow mode or a parallel process before entering the “main stream” of operations.

E. Risk and control (Risk)

Here security, compliance and reputation come into play. “Shadow AI” (using AI tools without supervision) is a real problem – Gartner forecasts that by 2030 more than 40% of organizations will experience security or compliance incidents for this reason, so policies and education are key.

A simple way to choose a candidate: first aim at processes with high Impact and high Feasibility but low Risk – those should be first in the pilot.
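If you want to keep the comparison objective, the 1–5 scoring can live in a short script rather than a spreadsheet debate. A minimal sketch; the weights, candidate names and scores below are illustrative assumptions, not recommendations from this guide:

```python
# A minimal sketch of the 1-5 scoring from step 2. All names, weights and
# scores below are illustrative assumptions, not figures from the article.
from dataclasses import dataclass

# Higher weight = the dimension matters more; risk is subtracted from the total.
WEIGHTS = {"impact": 3, "repeatability": 2, "data": 2, "feasibility": 2, "risk": -2}

@dataclass
class Candidate:
    name: str
    impact: int          # 1-5: hours saved, error cost, revenue effect
    repeatability: int   # 1-5: volume and standardization
    data: int            # 1-5: data exists, is usable and legal to process
    feasibility: int     # 1-5: integrations, ability to run "on the side"
    risk: int            # 1-5: security / compliance / reputation exposure

    def score(self) -> float:
        return sum(WEIGHTS[k] * getattr(self, k) for k in WEIGHTS)

candidates = [
    Candidate("ticket triage + suggested responses", 5, 5, 4, 4, 2),
    Candidate("contract review end-to-end", 4, 2, 2, 2, 5),
    Candidate("CRM update after sales calls", 3, 4, 4, 4, 1),
]

# Pick the 2-3 best candidates for the pilot.
for c in sorted(candidates, key=Candidate.score, reverse=True)[:3]:
    print(f"{c.score():>5.1f}  {c.name}")
```

The exact weights matter less than agreeing on them before the workshop, so the discussion is about scores per dimension rather than about the ranking itself.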

Step 3: define a “thin slice” – minimal implementation that delivers KPIs

Most common mistake: trying to improve the entire process at once.

A better pattern:

  • choose one stage of the process (e.g. “ticket triage”),
  • define 1–2 KPIs (e.g. time to first response, % of correct routings; a measurement sketch follows this list),
  • build a solution that will show a trend within 2–6 weeks.
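For the two example KPIs, the pilot report does not need a BI platform on day one. A minimal sketch, assuming tickets are exported to a CSV with hypothetical column names:

```python
# A minimal sketch of measuring two example "thin slice" KPIs.
# The CSV file name and column names are hypothetical assumptions.
import csv
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

with open("pilot_tickets.csv", newline="", encoding="utf-8") as f:
    # expected columns: created_at, first_response_at, routed_team, correct_team
    rows = list(csv.DictReader(f))

ttfr = [hours_between(r["created_at"], r["first_response_at"]) for r in rows]
correct = [r["routed_team"] == r["correct_team"] for r in rows]

print(f"time to first response (avg): {sum(ttfr) / len(ttfr):.1f} h")
print(f"correct routings: {100 * sum(correct) / len(correct):.0f}% of {len(correct)} tickets")
```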

OpenAI, in its guide on identifying and scaling use cases, emphasizes that AI adoption is more than finding an idea – you need a process that leads from value and leadership to deployment and scale.

Step 4: design quality control and “safety rails” from day one

If AI is to affect the customer, finances or security, set clear rules from day one: who approves use, what data may be processed, what quality controls apply and which risk metrics you monitor. The basic safety rails are:

  • human‑in‑the‑loop (a human approves),
  • confidence thresholds (below threshold → escalation),
  • quality monitoring (e.g. sampling 50 cases per week),
  • logging decisions (auditability),
  • data policy (what can go to the model / what cannot).

This also realistically limits the risk of “shadow AI”, because it gives people a legal, safe usage path.
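Two of these rails, the confidence threshold and decision logging, are easy to make concrete. A minimal sketch; the threshold value, queue names and the stubbed‑out model call are assumptions for illustration:

```python
# A minimal sketch of the "confidence threshold + human-in-the-loop" rail.
# The model call is stubbed out; thresholds and field names are assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
CONFIDENCE_THRESHOLD = 0.85  # below this value the case is escalated to a human

def classify_ticket(text: str) -> tuple[str, float]:
    """Stand-in for the real model; returns (suggested_queue, confidence)."""
    return ("billing", 0.72)

def handle_ticket(ticket_id: str, text: str) -> dict:
    queue, confidence = classify_ticket(text)
    decision = {
        "ticket_id": ticket_id,
        "suggested_queue": queue,
        "confidence": confidence,
        "auto_routed": confidence >= CONFIDENCE_THRESHOLD,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Log every decision so the weekly quality sampling has an audit trail.
    logging.info(json.dumps(decision))
    if decision["auto_routed"]:
        decision["status"] = "routed_automatically"
    else:
        decision["status"] = "escalated_to_human"  # a human approves the routing
    return decision

print(handle_ticket("T-1024", "I was charged twice for my subscription"))
```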

Step 5: only then “scale” – standardize and replicate the pattern

Scaling is not about “adding more prompts”. It is about:

  • a standard for integration,
  • a library of components (e.g. classification, extraction, generation, validation; a shared interface is sketched after this list),
  • shared KPIs,
  • a process for deploying changes and tests.
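One way to make the component library concrete is a shared contract that every step (classification, extraction, validation, and so on) implements, so pipelines, logging and KPI collection stay uniform across processes. A minimal sketch with hypothetical components:

```python
# A minimal sketch of a shared component interface for the "library of
# components" idea; the concrete components below are hypothetical stand-ins.
from abc import ABC, abstractmethod

class Component(ABC):
    """Every reusable step exposes the same contract, so pipelines,
    logging and KPI collection can be standardized across processes."""
    @abstractmethod
    def run(self, payload: dict) -> dict: ...

class Classifier(Component):
    def run(self, payload: dict) -> dict:
        payload["category"] = "billing"  # stand-in for a model call
        return payload

class Validator(Component):
    def run(self, payload: dict) -> dict:
        payload["valid"] = "category" in payload  # simple rule-based check
        return payload

def run_pipeline(steps: list[Component], payload: dict) -> dict:
    for step in steps:
        payload = step.run(payload)
    return payload

print(run_pipeline([Classifier(), Validator()], {"text": "I was charged twice"}))
```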

A good practice is a discovery process based on mining: process mining shows how the process really flows through systems and where losses occur (bottlenecks, unnecessary steps), and task mining reveals how people do the work on the desktop and which activities are suitable for automation.

Examples of processes that AI usually improves “for real”

Below are example candidates that often score well:

Customer service / helpdesk

  • ticket classification and routing,
  • summarizing context from customer history,
  • suggesting responses + links to the knowledge base,
  • detecting escalation/churn risk.

Sales and marketing

  • personalized email/offer drafts,
  • conversation summaries and CRM updates,
  • lead qualification (assistance, not “autopilot”).

Software engineering

  • generating tests, documentation, assisted refactoring,
  • bug triage, incident summaries.

This aligns with the areas of highest value indicated in McKinsey’s analysis.

FAQ: how to choose processes that AI will really improve

How to recognize that a process is “not suitable” for AI (yet)?

If it has low volume, chaotic data, no owner for its KPIs, or a very high error risk without the option to add controls – it is better to start elsewhere or clean up the data first.

Should AI automate the process end‑to‑end right away?

Rarely. Most often a better start is assistance or semi‑automation with human approval, and only then increasing autonomy once you have data on quality.

What data are minimally needed for a sensible pilot?

You need just enough data to measure the KPIs and train or calibrate the solution: process logs, historical tickets, case descriptions, outcomes (e.g. whether the routing was correct). Process mining / task mining can help “pull the truth” about the process out of logs and user actions.

How to avoid “shadow AI” and data leakage into tools?

Give teams a safe alternative (approved tools, data rules, monitoring), not just a ban. Gartner warns that unauthorized use of AI can lead to security and compliance breaches.

How long should a pilot last to make sense?

Most often 2–6 weeks for a “thin slice”. More important than time is whether you have: KPIs, an owner, access to data and the ability to deploy on a real fragment of work.

Is process mining necessary?

Not always, but it greatly helps avoid decisions based on opinions. If you have logs in systems (CRM/ERP/ticketing), process mining will more quickly show bottlenecks and process variants.

Which KPIs most often show the real value of AI?

Cycle time, time to first response, cost per case, % of errors/rework, customer satisfaction, team throughput without increasing headcount.
