Experiments let you set up controlled tests on your marketing activity, compare performance across defined time windows, and record structured outcomes. Whether you’re testing a new bidding strategy, a creative variant, or a channel mix change, Experiments gives you a consistent framework for tracking what worked and why.
Navigate to Experiments
Select Experiments in the left sidebar. The table shows all experiments your organization has created, sorted by most recently updated.
Create an experiment
Name your experiment
Enter a descriptive name that makes the hypothesis clear (e.g., “Broad match vs. exact match — Brand terms Q3”).
Set the primary metric
Choose the metric you’re optimizing for (e.g., conversions, ctr_pct, spend, clicks). This is the metric Marcenta uses to evaluate the experiment result.
Define baseline and experiment date ranges
Set a baseline period (the control period before your change) and an experiment period (the period during which your change was active). Both ranges are required to evaluate results.
Add context (optional)
Fill in the channel, hypothesis, description, and any segment filters that apply to this experiment. These fields help you and your team understand the test at a glance.
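The steps above can be sketched as a single experiment definition. This is an illustrative sketch only: the field names and values below are assumptions for clarity, not Marcenta’s documented API schema.

```python
# Hypothetical experiment definition. Field names are illustrative
# assumptions, not a confirmed Marcenta payload format.
experiment = {
    "name": "Broad match vs. exact match - Brand terms Q3",
    "primary_metric": "conversions",   # the metric Marcenta evaluates
    "direction": "increase",           # desired movement of the metric
    # Both date ranges are required to evaluate results.
    "baseline_range": {"start": "2024-06-01", "end": "2024-06-14"},
    "experiment_range": {"start": "2024-06-15", "end": "2024-06-28"},
    # Optional context fields:
    "channel": "paid_search",
    "hypothesis": "Broad match will lift conversions on brand terms.",
    "segment_filters": {"campaign": "brand-terms-q3"},
}
```

Note that the baseline and experiment windows above are the same length (14 days each), which keeps the later comparison fair.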
Experiment statuses
An experiment moves through the following statuses during its lifecycle.

| Status | Meaning |
|---|---|
| Draft | The experiment has been created but not yet started. You can edit all fields. |
| Running | The experiment is actively in progress. |
| Completed | The experiment period has ended and results are available. |
| Archived | The experiment has been retired and is no longer active. |
Evaluate results
Once both the baseline and experiment date ranges have passed, select Evaluate on the experiment detail page. Marcenta compares the primary metric value across both periods and calculates:
- Baseline value — the metric total during the baseline period
- Experiment value — the metric total during the experiment period
- Delta — the absolute difference between the two periods
- Delta % — the percentage change from baseline to experiment
| Result status | Meaning |
|---|---|
| Helped | The metric moved in the desired direction |
| Hurt | The metric moved in the opposite direction |
| Neutral | No meaningful change was detected |
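The evaluation logic above can be sketched in Python. The 2% neutral threshold is an illustrative assumption; the documentation does not specify how Marcenta decides that no meaningful change was detected.

```python
def evaluate(baseline_value, experiment_value, direction, neutral_threshold_pct=2.0):
    """Sketch of the evaluation: delta, delta %, and a result status.

    direction: "increase" or "decrease", the desired movement of the metric.
    neutral_threshold_pct is an assumed cutoff for "no meaningful change".
    """
    delta = experiment_value - baseline_value
    delta_pct = (delta / baseline_value) * 100 if baseline_value else 0.0

    if abs(delta_pct) < neutral_threshold_pct:
        status = "Neutral"
    elif (delta > 0) == (direction == "increase"):
        status = "Helped"   # metric moved in the desired direction
    else:
        status = "Hurt"     # metric moved in the opposite direction

    return {"delta": delta, "delta_pct": round(delta_pct, 2), "status": status}

# 100 baseline conversions vs. 125 during the experiment, wanting an increase:
result = evaluate(100, 125, "increase")
# -> {"delta": 25, "delta_pct": 25.0, "status": "Helped"}
```

A drop in a metric you want to decrease (such as spend) is also reported as Helped, which is why setting the direction field matters.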
Set the direction field to increase or decrease when creating or editing the experiment so Marcenta knows which way the metric should move for a positive outcome.
Edit an experiment
Open any experiment and select Edit to update the name, channel, hypothesis, date ranges, status, or primary metric. You can edit any field that has not yet been locked by an evaluation.
Tips for well-structured experiments
Keep baseline and experiment periods the same length
Using equal-length windows (e.g., 14 days vs. 14 days) removes calendar effects like weekends or seasonality from the comparison. Avoid comparing a short experiment window to a long baseline.
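One way to guarantee equal-length windows is to derive the baseline from the experiment dates, ending the day before the experiment starts. This is a small sketch; the helper name is hypothetical, not part of Marcenta.

```python
from datetime import date, timedelta

def matched_baseline(exp_start: date, exp_end: date) -> tuple[date, date]:
    """Return a baseline window of identical (inclusive) length that
    ends the day before the experiment period begins. Hypothetical helper."""
    span = exp_end - exp_start            # inclusive length minus one day
    base_end = exp_start - timedelta(days=1)
    base_start = base_end - span
    return base_start, base_end

# A 14-day experiment (Jul 15-28) gets a 14-day baseline (Jul 1-14):
baseline = matched_baseline(date(2024, 7, 15), date(2024, 7, 28))
# -> (date(2024, 7, 1), date(2024, 7, 14))
```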
Document your hypothesis before running
Writing down what you expect to happen—and why—makes it easier to learn from the result regardless of whether it helps or hurts. Use the hypothesis field before you start.
Use segment filters to isolate the test
If your change only affected one campaign or channel, add a segment filter to narrow the evaluation to that scope. This prevents unrelated traffic from diluting the result.