
Experiments let you set up controlled tests on your marketing activity, compare performance across defined time windows, and record structured outcomes. Whether you’re testing a new bidding strategy, a creative variant, or a channel mix change, Experiments gives you a consistent framework for tracking what worked and why.

Select Experiments in the left sidebar. The table shows all experiments your organization has created, sorted by most recently updated.

Create an experiment

1. Open the new experiment form
   Select New Experiment from the Experiments page.

2. Name your experiment
   Enter a descriptive name that makes the hypothesis clear (e.g., “Broad match vs. exact match — Brand terms Q3”).

3. Set the primary metric
   Choose the metric you’re optimizing for (e.g., conversions, ctr_pct, spend, clicks). This is the metric Marcenta uses to evaluate the experiment result.

4. Define baseline and experiment date ranges
   Set a baseline period (the control period before your change) and an experiment period (the period during which your change was active). Both ranges are required to evaluate results.

5. Add context (optional)
   Fill in the channel, hypothesis, description, and any segment filters that apply to this experiment. These fields help you and your team understand the test at a glance.

6. Save the experiment
   Select Save. The experiment is created with Draft status and appears in your experiments list.
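Conceptually, the saved record just collects the fields from the steps above. A minimal sketch in Python follows; the field names (primary_metric, baseline_start, and so on) are illustrative assumptions, not Marcenta's actual schema:

```python
# Hypothetical sketch of an experiment record.
# Field names are illustrative, not Marcenta's actual schema.
experiment = {
    "name": "Broad match vs. exact match - Brand terms Q3",
    "primary_metric": "conversions",    # metric used to evaluate the result
    "baseline_start": "2024-07-01",     # control period, before the change
    "baseline_end": "2024-07-14",
    "experiment_start": "2024-07-15",   # period while the change was active
    "experiment_end": "2024-07-28",
    "channel": "search",                # optional context
    "hypothesis": "Broad match lifts conversions on brand terms",
    "direction": "increase",            # which way the metric should move
    "status": "draft",                  # new experiments start as drafts
}
```

Only the name, primary metric, and the two date ranges are required by the steps above; everything else is optional context.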

Experiment statuses

An experiment moves through the following statuses during its lifecycle.
  • Draft: The experiment has been created but not yet started. You can edit all fields.
  • Running: The experiment is actively in progress.
  • Completed: The experiment period has ended and results are available.
  • Archived: The experiment has been retired and is no longer active.
Update the status at any time from the experiment detail view to reflect where the test stands.

Evaluate results

Once both the baseline and experiment date ranges have passed, select Evaluate on the experiment detail page. Marcenta compares the primary metric value across both periods and calculates:
  • Baseline value — the metric total during the baseline period
  • Experiment value — the metric total during the experiment period
  • Delta — the absolute difference between the two periods
  • Delta % — the percentage change from baseline to experiment
Marcenta then applies the direction field to determine the result status:
  • Helped: The metric moved in the desired direction.
  • Hurt: The metric moved in the opposite direction.
  • Neutral: No meaningful change was detected.
Set the direction field to increase or decrease when creating or editing the experiment so Marcenta knows which way the metric should move for a positive outcome.
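The evaluation described above can be sketched in a few lines of Python. This illustrates the arithmetic only, not Marcenta's implementation; in particular, the threshold for "no meaningful change" is an assumption made for the example:

```python
def evaluate(baseline_value, experiment_value, direction, neutral_threshold_pct=1.0):
    """Compare a primary metric across the baseline and experiment periods.

    direction: "increase" or "decrease" - the desired movement.
    neutral_threshold_pct: deltas smaller than this (in percent) count as
    neutral. This cutoff is an assumed illustration, not Marcenta's rule.
    """
    delta = experiment_value - baseline_value
    delta_pct = 100.0 * delta / baseline_value if baseline_value else 0.0

    if abs(delta_pct) < neutral_threshold_pct:
        return delta, delta_pct, "neutral"
    moved_up = delta > 0
    wanted_up = direction == "increase"
    return delta, delta_pct, "helped" if moved_up == wanted_up else "hurt"

# A 20% lift in a metric we wanted to increase:
print(evaluate(100, 120, "increase"))  # (20, 20.0, 'helped')
```

Note that the same delta can be "helped" or "hurt" depending on direction: a 20% drop in spend helps when direction is decrease.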

Edit an experiment

Open any experiment and select Edit to update the name, channel, hypothesis, date ranges, status, or primary metric. You can edit any field that has not yet been locked by an evaluation.

Tips for well-structured experiments

Using equal-length windows (e.g., 14 days vs. 14 days) removes calendar effects like weekends or seasonality from the comparison. Avoid comparing a short experiment window to a long baseline.
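One easy way to guarantee equal-length windows is to derive the baseline dates from the experiment dates. A small sketch with Python's standard datetime module (the 14-day window and the specific dates are just examples):

```python
from datetime import date, timedelta

# Example: the experiment ran for 14 days; mirror the same length
# immediately before the change went live.
experiment_start = date(2024, 7, 15)
experiment_end = date(2024, 7, 28)

window = experiment_end - experiment_start            # inclusive 14-day span
baseline_end = experiment_start - timedelta(days=1)   # day before the change
baseline_start = baseline_end - window                # same-length control window

print(baseline_start, baseline_end)  # 2024-07-01 2024-07-14
```

Anchoring the baseline directly before the experiment also keeps both windows covering the same weekdays, which is what removes day-of-week effects from the comparison.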
Writing down what you expect to happen—and why—makes it easier to learn from the result regardless of whether it helps or hurts. Use the hypothesis field before you start.
If your change only affected one campaign or channel, add a segment filter to narrow the evaluation to that scope. This prevents unrelated traffic from diluting the result.
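Scoping the evaluation to a segment amounts to filtering rows before totaling the primary metric. A hedged sketch of the idea; the row shape and filter keys here are assumptions for illustration, not Marcenta's data model:

```python
# Hypothetical daily metric rows; the shape and keys are illustrative only.
rows = [
    {"date": "2024-07-15", "channel": "search", "campaign": "brand", "conversions": 12},
    {"date": "2024-07-15", "channel": "social", "campaign": "awareness", "conversions": 30},
    {"date": "2024-07-16", "channel": "search", "campaign": "brand", "conversions": 15},
]

# Only the search/brand campaign was changed, so evaluate just that slice.
segment = {"channel": "search", "campaign": "brand"}
scoped = [r for r in rows if all(r[k] == v for k, v in segment.items())]
total = sum(r["conversions"] for r in scoped)
print(total)  # 27
```

Without the filter, the unrelated social traffic (30 conversions here) would be mixed into the total and could mask the effect of the change.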