3 changes: 3 additions & 0 deletions .gitignore
@@ -1,3 +1,6 @@
# MacOS
.DS_Store

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
76 changes: 76 additions & 0 deletions a11y/docs/a11y_page_titles.md
@@ -0,0 +1,76 @@
# a11y_page_titles

Django management command that generates a markdown report comparing page titles across the OLH, Material, and Clean themes for a list of URLs.

## Purpose
To assist with finding areas of non-compliance with [WCAG 2.4.2 Page Titled](https://www.w3.org/WAI/WCAG22/Understanding/page-titled):
> Web pages have titles that describe topic or purpose.

"Describe" is subjective, and this command cannot determine whether a title is descriptive. It is a helper command that generates a list of titles for a developer to review. A check is included for whether all three themes have the same title; this has no direct bearing on whether the title passes the requirement, but where the same title is expected across all three themes it provides a quick way to review the results.

## What it does

The command:

1. Loads a list of URLs (from a JSON file)
2. Fetches each URL once per theme (olh, material, clean) via the Django test client
3. Extracts the `<title>` from each response
4. Writes a markdown table to a file, with one row per URL and columns for each theme’s title and whether all three match
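The per-URL work in steps 2–3 can be sketched as follows. This is an illustrative Python sketch, not the command's actual code: `fetch` stands in for the Django test client call, and the helper names are made up for this example.

```python
import re

THEMES = ["olh", "material", "clean"]  # the three themes compared in the report

def extract_title(html: str) -> str:
    """Return the text of the first <title> element, or '' if none is found."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else ""

def titles_for_url(fetch, url):
    """Fetch `url` once per theme and map theme -> extracted title.

    `fetch(url, theme)` stands in for the Django test client request; the
    real command presumably activates each theme before fetching.
    """
    titles = {theme: extract_title(fetch(url, theme)) for theme in THEMES}
    # The "All Identical" column: true when every theme returned the same title.
    titles["all_identical"] = len(set(titles.values())) == 1
    return titles
```

Each returned dict becomes one row of the markdown table.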

## How to use

### Defaults
- Input [`page_title_urls.json`](../playwright/tests/test_inputs/page_title_urls.json)
- Output [`results/markdown/page_titles.md`](../results/markdown/page_titles.md)

> **Note:** This command uses `page_title_urls.json` as its default, not `front_of_house.json` (which is the default for the Playwright tests). The page title check deliberately spans multiple journals (OLH, ANE, Glossa) so that titles which should include the journal name can be verified across different journals — a single-journal list would not catch errors where the journal name is missing or incorrect.

### Options

| Option | Default | Description |
|--------|---------|-------------|
| `--output PATH` | `a11y/results/markdown/page_titles.md` | Path to the output markdown file. The directory is created if it doesn’t exist. |
| `--urls-json PATH` | `a11y/playwright/tests/test_inputs/page_title_urls.json` | Path to a JSON file containing an array of URL strings to check. |

### Examples

**Default behaviour** — use default URL list and write to default output:

```bash
python manage.py a11y_page_titles
```

**Both options** — custom URLs and custom output:

```bash
python manage.py a11y_page_titles --urls-json path/to/urls.json --output path/to/report.md
```

### URL list format

The `--urls-json` file must be valid JSON and contain a **single array of URL strings**. Order is preserved and determines the order of rows in the report. Duplicates are removed while keeping the first occurrence.

Example:

```json
[
"http://localhost:8000/",
"http://localhost:8000/contact",
"http://localhost:8000/olh/"
]
```

## Results

The results can be found at [`a11y/results/markdown/page_titles.md`](../results/markdown/page_titles.md). Only a markdown file is produced; there is no JSON output. The generated results table includes a final blank column for human review.

| URL | OLH Title | Material Title | Clean Title | All Identical | Human Review |
|-----|-----------|----------------|-------------|---------------|--------------|
| [http://localhost:8000/](http://localhost:8000/) | Open Library of Humanities | Open Library of Humanities | Open Library of Humanities | :white_check_mark: | |
| ...| ... | ... | ... | ... | |

This should be filled out by a **human** after the table has been generated, to show which titles satisfy the requirement of a descriptive title, and which do not.

Line 4 must also be added afterwards, to note which commit the test was run against, e.g.

> Test run on tag a11y-audit-1.9, 17 March 2026.
36 changes: 36 additions & 0 deletions a11y/docs/a11y_scripts.md
@@ -0,0 +1,36 @@
# Accessibility Testing Helper Scripts

## Management Commands
These are in the same directory as the other management commands.
1. [a11y_page_titles](a11y_page_titles.md)


## Playwright Scripts
These are in the `playwright/` directory. Run from there with `npx playwright test`.

Results appear in `playwright/test-results/`. This directory is overwritten each time tests are run, so results need to be copied out. For tracking, results are manually copied to `a11y/results/json`.

1. [axe](axe.md) — `tests/axe-general.test.js` and `tests/axe-detail.test.js`
   Runs axe-core against a list of URLs and checks for WCAG 2.2 Level A/AA violations.
2. [Target size](target_size.md) — `tests/target_size.test.js`
   Records the pixel dimensions of every focusable element against WCAG 2.2 AA (24 px) and AAA (44 px) thresholds. Outputs a markdown table and CSV to `test-results/`.


## URL input files

All URL lists live in `playwright/tests/test_inputs/`. Pass a different file to any test with the `URL_LIST` environment variable, or run a single URL with `A11Y_URL`.

| File | Default for | Description |
|------|------------|-------------|
| `front_of_house.json` | Accessibility test, Target size test | Broad front-of-house URL list covering the three main themes (clean, OLH, material) across a representative set of page types. Used as the default for most automated tests. |
| `page_title_urls.json` | `a11y_page_titles` management command | Multi-journal URL list (OLH, ANE, Glossa) used to verify page titles include the correct journal name. Intentionally spans multiple journals — a single-journal list would not catch errors where the journal name is missing or wrong. |
| `clarity.json` | — | URLs for the Clarity theme. Use with `URL_LIST` when testing Clarity specifically. |
| `clean.json` | — | URLs for the Clean theme. |
| `hourglass.json` | — | URLs for the Hourglass theme. |
| `material.json` | — | URLs for the Material theme. |
| `olh.json` | — | URLs for the OLH theme. |
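The resolution order implied above — a single URL via `A11Y_URL`, else a list via `URL_LIST`, else the per-test default file — can be sketched as follows. The real logic lives in the JavaScript test files; this Python version and its function name are illustrative only.

```python
import json
import os

def resolve_urls(default_list_path: str) -> list[str]:
    """Pick the URL set for a run: A11Y_URL wins, then URL_LIST, then the
    test's default file. Illustrative sketch of the documented precedence."""
    single = os.environ.get("A11Y_URL")
    if single:
        return [single]
    path = os.environ.get("URL_LIST", default_list_path)
    with open(path) as f:
        return json.load(f)
```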

## Results
When Playwright runs, it deletes the contents of `playwright/test-results/` and writes the new results there. Any results we wish to keep and track should be copied to `a11y/results`.

The JSON is the source of truth; the markdown is a human-readable summary.
135 changes: 135 additions & 0 deletions a11y/docs/axe.md
@@ -0,0 +1,135 @@
# Axe

Two test files are provided, both producing the same JSON output format via the shared teardown:

| | `axe-general.test.js` | `axe-detail.test.js` |
|---|---|---|
| **Scope** | All URLs in the URL list | Single URL (`A11Y_URL`) |
| **Rules** | Full WCAG 2.2 A/AA rule set | Single rule (`A11Y_RULE`) |
| **Purpose** | Overview — breadth across all pages | Investigation — depth on one rule |
| **Node detail** | None — violation counts per URL per browser only | Full axe node data including `any`/`all`/`none` check arrays |
| **Env vars required** | None | `A11Y_URL` and `A11Y_RULE` (errors if either missing) |

## How to use
:warning: If running against a dev install, disable the debug toolbar before running tests, or you will get errors from the toolbar itself.

### Overview run (`axe-general.test.js`)
This is for a general list of errors that we store in this repo, track over time and use to generate reports.

Run from the `playwright/` directory, using `axe-general.test` as the test filter:

```bash
npx playwright test axe-general.test --project=chromium
```

Run against multiple browsers in one pass by adding more `--project` flags, or omit `--project` entirely to run all configured browsers (chromium, firefox, webkit):

```bash
npx playwright test axe-general.test --project=chromium --project=firefox
npx playwright test axe-general.test
```

Additional browser configurations (e.g. mobile viewports) can be added to the `projects` array in `playwright.config.js`.

### Detail run (`axe-detail.test.js`)
This gathers more information about a specific rule and page, and is used when working on fixing errors. This is for information only and we do not store the results in this repo.

Requires `A11Y_URL` and `A11Y_RULE` to be set:

```bash
A11Y_URL=http://localhost:8000/ A11Y_RULE=color-contrast npx playwright test axe-detail --project=chromium
```

Run across all browsers:

```bash
A11Y_URL=http://localhost:8000/ A11Y_RULE=color-contrast npx playwright test axe-detail
```

Note: all tests should report as 'passed' in the terminal. Pass/fail reflects whether the test ran successfully, not whether violations were found. If the tests fail, check that the server is running!

## JSON output

Results are written to `playwright/test-results/results-{timestamp}.json` after each run. The file is a JSON array — one entry per axe rule that was run, sorted alphabetically by rule `id`.

The `{timestamp}` is minute-precision (`YYYY-MM-DDTHH-MM`) and is taken at the moment the run begins, so it is consistent across all entries in the file and can be used to identify the run.
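The two timestamp forms — hyphenated in the filename, with a colon inside the JSON `test_date` fields — can be produced like this. This is a Python sketch of the documented format; the tests themselves generate it in JavaScript.

```python
from datetime import datetime

def stamps(now: datetime) -> tuple[str, str]:
    """Return (filename_stamp, json_stamp).

    The filename form replaces the colon with a hyphen so it is safe on
    all filesystems; the JSON form keeps the colon."""
    json_stamp = now.strftime("%Y-%m-%dT%H:%M")
    return json_stamp.replace(":", "-"), json_stamp
```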

All rules use the same structure: a `urls` array where every tested URL appears, each with a `browsers` object recording how many violations that browser found. `violations: 0` means the rule passed on that URL for that browser.

```json
{
  "id": "aria-allowed-attr",
  "help": "Elements must only use supported ARIA attributes",
  "tags": ["cat.aria", "wcag2a", "wcag412", "..."],
  "urls": [
    {
      "url": "http://localhost:8000/",
      "browsers": {
        "chromium": { "test_date": "2026-03-23T12:56", "violations": 0 },
        "firefox": { "test_date": "2026-03-23T12:56", "violations": 0 }
      }
    }
  ]
}
```

A rule with violations looks the same — URLs where the rule passed still appear with `violations: 0`, and URLs with failures show the count:

```json
{
  "id": "color-contrast",
  "help": "Elements must have sufficient color contrast",
  "tags": ["cat.color", "wcag2aa", "wcag143", "..."],
  "urls": [
    {
      "url": "http://localhost:8000/",
      "browsers": {
        "chromium": { "test_date": "2026-03-23T12:56", "violations": 0 },
        "firefox": { "test_date": "2026-03-23T12:56", "violations": 0 }
      }
    },
    {
      "url": "http://localhost:8000/articles/",
      "browsers": {
        "chromium": { "test_date": "2026-03-23T12:56", "violations": 3 },
        "firefox": { "test_date": "2026-03-23T12:56", "violations": 1 }
      }
    }
  ]
}
```

To investigate which specific elements are failing, use `axe-detail.test.js`.

**Detail run (`axe-detail.test.js`)** — nodes additionally include the full axe check arrays:

- **`any`** — checks where at least one must pass; maps to "Fix any of the following" in the failure summary
- **`all`** — checks that must all pass
- **`none`** — conditions that must all be false

Each check entry contains `id`, `impact`, `message`, and `data` (e.g. exact contrast ratios, specific ARIA attributes). Use the detail run when you need this level of diagnostic information for a specific rule.

### Field reference

| Field | Description |
|-------|-------------|
| `id` | Axe rule identifier |
| `help` | Short description of what the rule checks |
| `tags` | WCAG and category tags (e.g. `wcag2a`, `best-practice`) |
| `urls` | All pages the rule was tested on, sorted by URL |
| `urls[].url` | The page URL |
| `urls[].browsers` | Per-browser results for this rule on this page |
| `urls[].browsers[browser].test_date` | Minute-precision timestamp of the run, taken at test start — consistent across the entire run |
| `urls[].browsers[browser].violations` | Number of failing elements found by that browser on that page; `0` if the rule passed |
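A results file with this structure can be queried for failures in a few lines of Python. A sketch against the documented format, using the field names from the table above:

```python
def failing_entries(results: list[dict]) -> list[tuple[str, str, str, int]]:
    """Return (rule_id, url, browser, violations) for every non-zero count
    in a parsed results-{timestamp}.json array."""
    out = []
    for rule in results:
        for entry in rule["urls"]:
            for browser, res in entry["browsers"].items():
                if res["violations"] > 0:
                    out.append((rule["id"], entry["url"], browser, res["violations"]))
    return out
```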

### Tracking history over time

The `test_date` inside each browser entry records when that result was observed. When running `axe-general`, the output file should be copied over the appropriate history file after each run (`a11y/results/json/file.json`) — git will then show exactly which results changed between runs.

## Quality Assurance

These tests should be repeatable when run locally a few minutes apart with no other changes. To verify, run them twice a few minutes apart and compare the outputs, for example using:
```bash
git diff | grep '^[+-]' | grep -v 'test_date'
```
If there are differences between runs, investigate whether any dynamic page content is affecting violation counts. It is important that the data is repeatable before results are relied upon.
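The same comparison can be done outside git by stripping `test_date` before diffing. A sketch, assuming two saved result files:

```python
import json

def strip_dates(node):
    """Recursively drop test_date keys so two runs can be compared directly."""
    if isinstance(node, dict):
        return {k: strip_dates(v) for k, v in node.items() if k != "test_date"}
    if isinstance(node, list):
        return [strip_dates(v) for v in node]
    return node

def runs_match(path_a: str, path_b: str) -> bool:
    """True when two result files differ only in their timestamps."""
    with open(path_a) as a, open(path_b) as b:
        return strip_dates(json.load(a)) == strip_dates(json.load(b))
```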
86 changes: 86 additions & 0 deletions a11y/docs/target_size.md
@@ -0,0 +1,86 @@
# Target size test

Playwright test that records the pixel dimensions of every focusable element on each tested page and checks them against the WCAG 2.2 target size thresholds.

## Purpose

To assist with finding areas of non-compliance with [WCAG 2.5.8 Target Size (Minimum)](https://www.w3.org/WAI/WCAG22/Understanding/target-size-minimum) (Level AA) and [WCAG 2.5.5 Target Size (Enhanced)](https://www.w3.org/WAI/WCAG22/Understanding/target-size-enhanced) (Level AAA):

| Level | Minimum size |
|-------|-------------|
| AA | 24 × 24 px |
| AAA | 44 × 44 px |

The test collects data for all focusable elements — it does not fail on small targets. Use the output report to identify elements that fall below the thresholds.

**Text links** (`<a href>` with no `img` or `svg` child) are flagged separately in the report. WCAG 2.2 provides a height exception for inline text links because their height is determined by the surrounding line height rather than the author.
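The two checks — threshold comparison and the text-link flag — can be sketched in Python. The real test collects this data in JavaScript via Playwright; the record shape and names here are illustrative.

```python
AA_MIN, AAA_MIN = 24, 44  # WCAG 2.2 target-size thresholds in px

def meets(width: float, height: float, threshold: int) -> bool:
    """Both dimensions must reach the threshold."""
    return width >= threshold and height >= threshold

def classify(el: dict) -> dict:
    """el is an assumed collected element record, e.g.
    {"tag": "a", "href": True, "has_img_or_svg": False, "width": 120, "height": 18}."""
    return {
        # Text link: <a href> with no img or svg child (height exception applies)
        "text_link": bool(el["tag"] == "a" and el.get("href")
                          and not el.get("has_img_or_svg")),
        "aa": meets(el["width"], el["height"], AA_MIN),
        "aaa": meets(el["width"], el["height"], AAA_MIN),
    }
```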

## Prerequisites

Install dependencies and browsers once from the `playwright/` directory:

```bash
npm install
npx playwright install
```

## URL list

Defaults to `tests/test_inputs/front_of_house.json`. This is the same default as the accessibility test — see [playwright_accessibility_testing.md](playwright_accessibility_testing.md) for details on the URL list format and environment variable overrides.

| Environment variable | Effect |
|---------------------|--------|
| `URL_LIST=/path/to/urls.json` | Use a different JSON file (array of URL strings) |
| `A11Y_URL=http://localhost:8000/olh/` | Run against a single URL only |

## How to run

From the `playwright/` directory:

```bash
npx playwright test target_size --project=chromium
```

### Run against a single URL

```bash
A11Y_URL=http://localhost:8000/olh/ npx playwright test target_size --project=chromium
```

### Use a different URL list

```bash
URL_LIST=/path/to/my-urls.json npx playwright test target_size --project=chromium
```

### Run in headed mode

```bash
npx playwright test target_size --project=chromium --headed
```

## Output

After the run, two files are written to the test's output directory inside `test-results/`:

- **`target-size-report.md`** — a markdown table with one row per focusable element, showing URL, tag, accessible name, width, height, whether it is a text link, and whether it meets AA and AAA thresholds.
- **`target-size-report.csv`** — the same data in CSV format for spreadsheet analysis.
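Producing the CSV variant is straightforward with the standard library. A sketch with an assumed column set — the real report's columns may be named differently:

```python
import csv
import io

# Assumed column names, mirroring the report's per-element fields.
COLUMNS = ["url", "index", "tag", "name", "width", "height",
           "text_link", "aa", "aaa"]

def to_csv(rows: list[dict]) -> str:
    """Serialise element records to CSV text for spreadsheet analysis."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```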

The report header also shows the total number of URLs tested and total focusable elements found.

## Reading the results

Each row in the report shows:

| Column | Description |
|--------|-------------|
| URL | Page the element was found on |
| # | Element index on that page |
| Tag | HTML tag (`a`, `button`, `input`, etc.) |
| Name / Label | Accessible name (aria-label, title, placeholder, or text content) |
| Width / Height (px) | Rendered size |
| Text link | ✓ if this is an inline text `<a href>` with no img/svg |
| ≥24×24 (AA) | ✓ meets WCAG 2.2 Level AA minimum |
| ≥44×44 (AAA) | ✓ meets WCAG 2.2 Level AAA enhanced |

Elements where **Text link** is ✓ and the AA column is ✗ may still pass WCAG 2.2 due to the height exception — review these manually.
8 changes: 8 additions & 0 deletions a11y/playwright/.gitignore
@@ -0,0 +1,8 @@

# Playwright
node_modules/
/test-results/
/playwright-report/
/blob-report/
/playwright/.cache/
/playwright/.auth/