Your dashboards 
look amazing!

oh, wow!

(Such a shame you can’t trust them!)

(the trust issues, not the looks)

Let's fix that

But first, let me introduce myself:

I'm Kristina. I fix messy, mistrusted data, then build foundations your team can actually use.

One other thing you should know about me is that, unfortunately, this is the only band-aid I can offer you:

I’m good at cutting through the mess to get to the core, but I don’t enjoy creating more of it. That’s why I don’t use band-aids and duct tape to hold together pieces that don’t fit otherwise.

Ready to do things right? I’m in!

Want a quick fix that will become a problem down the line? Have fun! (I’ll pass, though.)

(add a point for every statement you find relatable)

How to tell you need me?

Your dashboards look great.
And none of them match!

Your attribution model is complex.
Teams can have so much fun interpreting data in their favor.

You're planning an AI implementation.
And your schemas don't even align — very brave.

You do A LOT of A/B* testing
*B as in “bias”

Leadership demands a data-driven strategy
But the only thing that data is driving is you — insane.

Scored more than 0? Let’s talk about how to get you back to square one and do it right this time. Book a 30-minute call.

Schedule an intro call with Kristina

But what do I actually do?

1. We start with the holy triage:

Audit the data pipeline &
(re)create documentation

Standardise event tracking (a quick sketch of what that means in practice follows this list)

Reconcile schemas across conflicting sources
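
So what does “standardise event tracking” actually mean? Here’s a minimal, purely illustrative sketch in Python: one agreed contract per event, and a check that every incoming event honours it. The event names and required fields below are made-up examples, not anyone’s real tracking plan.

```python
# Minimal sketch: validating incoming product events against one agreed contract.
# Event names and field lists are hypothetical examples for illustration only.

from datetime import datetime

# One agreed contract per event name (hypothetical)
REQUIRED_FIELDS = {
    "signup_completed": {"user_id", "timestamp", "plan", "source"},
    "checkout_started": {"user_id", "timestamp", "cart_value", "currency"},
}

def validate_event(event: dict) -> list[str]:
    """Return a list of problems; an empty list means the event conforms."""
    name = event.get("event_name")
    if name not in REQUIRED_FIELDS:
        return [f"unknown event name: {name!r}"]

    problems = []
    missing = REQUIRED_FIELDS[name] - event.keys()
    if missing:
        problems.append(f"{name}: missing fields {sorted(missing)}")

    # Timestamps should be ISO 8601 so sources can be reconciled later
    ts = event.get("timestamp")
    if ts is not None:
        try:
            datetime.fromisoformat(ts)
        except (TypeError, ValueError):
            problems.append(f"{name}: timestamp {ts!r} is not ISO 8601")

    return problems

if __name__ == "__main__":
    sample = {"event_name": "signup_completed", "user_id": "u_42",
              "timestamp": "2024-05-01T10:15:00", "plan": "pro"}
    print(validate_event(sample))  # -> ["signup_completed: missing fields ['source']"]
```

In practice a check like this sits at the ingestion boundary, so malformed events get flagged before they ever reach the warehouse or a dashboard.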

2. Move on to all things money

Model and diagnose cohort-based churn (there’s a small sketch of the idea right after this list)

Create forecast models no one will be able to blame for their poor decisions

Build undeniable multi-touch revenue attribution

Identify the true drop-off points of the user journey
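
“Cohort-based churn” can sound like consultant-speak, so here’s the core of it as a minimal Python/pandas sketch: group users by the month you first saw them, then count how many are still active in each later month. The tiny activity log is fabricated for illustration; the real version starts from your warehouse tables, not nine hand-typed rows.

```python
# Minimal sketch: cohort-based retention (the flip side of churn) with pandas.
# The toy activity log below is fabricated for illustration only.

import pandas as pd

# Hypothetical activity log: one row per (user, month they were active)
activity = pd.DataFrame({
    "user_id":      ["a", "a", "a", "b", "b", "c", "c", "c", "d"],
    "active_month": pd.to_datetime([
        "2024-01-01", "2024-02-01", "2024-03-01",   # user a
        "2024-01-01", "2024-02-01",                 # user b
        "2024-02-01", "2024-03-01", "2024-04-01",   # user c
        "2024-02-01",                               # user d
    ]),
})

# Each user's cohort = the first month we ever saw them
activity["cohort"] = activity.groupby("user_id")["active_month"].transform("min")

# Months since the cohort month (0 = the month they joined)
activity["period"] = (
    (activity["active_month"].dt.year - activity["cohort"].dt.year) * 12
    + (activity["active_month"].dt.month - activity["cohort"].dt.month)
)

# Cohort x period matrix of distinct active users
counts = activity.pivot_table(
    index="cohort", columns="period", values="user_id", aggfunc="nunique"
)

# Retention = share of each cohort still active in each later period
retention = counts.div(counts[0], axis=0)
print(retention.round(2))
```

Churn for any period is simply one minus the retention in that matrix, and the rows that diverge from the rest are where diagnosis starts.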

3. Support further growth through good experiments

Even spray & pray can be scientific if you design your experiments well enough. So we’ll stick to tests that make sense for your org size, your resources, and the speed at which you need decisions made.

Am I familiar with all of these words?

・ estimand-first design ・ well-defined exposure and assignment ・ correct randomization granularity ・ ex-ante power / MDE calculation ・ CUPED ・ cluster-robust and heteroskedastic inference ・ spillovers and cannibalization ・ sequential monitoring with predefined stopping boundaries ・ metric invariance and guardrail validation ・ analysis plans fixed prior to readout ・ reproducible pipelines and versioned outputs ・ decisions mapped to posterior / test outcomes (the ex-ante power / MDE one gets a small sketch a little further down)

I am.

Do I also understand how these apply to real-life business scenarios?

I do.

Am I trying to say I will tell you when A/B tests won’t work, and suggest alternative experiment designs that will still work and make sense?

Yup.
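
And since “ex-ante power / MDE calculation” made the list above, here’s the back-of-the-envelope version as a minimal Python sketch, using the standard normal-approximation formula for a two-arm test on a conversion rate. The 4% baseline and 20,000 users per arm are invented numbers; a real calculation uses your metric, its actual variance, and your actual traffic.

```python
# Minimal sketch: ex-ante MDE for a two-arm test on a conversion rate,
# using the standard normal-approximation formula. All numbers are made up.

from math import sqrt
from statistics import NormalDist

def mde_two_proportions(baseline: float, n_per_arm: int,
                        alpha: float = 0.05, power: float = 0.8) -> float:
    """Smallest absolute lift in conversion rate detectable with the given
    sample size, significance level, and power (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = baseline * (1 - baseline)
    return (z_alpha + z_power) * sqrt(2 * variance / n_per_arm)

if __name__ == "__main__":
    # Hypothetical numbers: 4% baseline conversion, 20k users per arm
    mde = mde_two_proportions(baseline=0.04, n_per_arm=20_000)
    print(f"Detectable lift: {mde:.4f} (about {mde / 0.04:.0%} relative)")
```

If the detectable lift comes out bigger than any effect you could plausibly produce, that’s the moment to pick a different design instead of running the test anyway.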

4. Then — we’ll talk about the thing

& go through the AI-READINESS checklist:

honest assessment: do you actually need ML/AI, or will you do better with proper BI? 

reality-check on real LLM use cases vs. hype chasing

getting infrastructure ready to support the real magic AI can deliver if you let it.

data quality & alignment audit, before you invest in models

5. Finally, we’ll throw some light on the true state of your:

data invisibility

team misalignment

inaccurate attribution

irrational decision making

unreliable dashboards

data illiteracy

non-scalable infrastructure

disorganised pipelines

overfitted predictions

underpowered tests

knowledge discontinuity

redefined success criteria

Seeing the real picture is rarely as pleasant as we imagine it to be. And still, in business as in life, facing reality head-on is the only path to growth.

Not convinced I’m as good as I say I am?

They were:

  • "She has this understanding of how, what, and why things should be done. And she will try to convey this information and insist that we do it correctly."

    — Olena, Kivra

  • "When she came in and showed us new methods of how to do things instead of A/B tests — that was cool."

    — Marcus, Spotify

  • "For leading the data literacy project I needed somebody who knows how to explain things in a simple manner to tech & non-tech people. She was the one."

    — Antonio, Capital

  • "She had built a reputation within the company. People trusted her. They understood that she knew what she was talking about."

    — Marco, Capital

Peer pressure worked, I'm ready to book a call

Everyone wants self-serve analytics and data visibility.

But data visibility will only make things worse.

If you make the wrong data more visible.

Let’s make sure that won’t happen.

Ready to talk numbers?

Book a call