
🔧 Test system fix plan

🎯 Issues found

1. โŒ Wrong log directory

Problem: Logs should be under results/logs/verilator/TEST, not build/log/verilator/TEST

Fix:
- Makefile.verilator: change the LOG_DIR variable
- test_manager.py: fix the log paths
- validation_runner.py: use results/
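
To keep the three files consistent, the path could be defined once in a shared helper. A minimal sketch; the `get_log_dir` name and the `results/` root are assumptions based on the layout above, not an existing API:

```python
from pathlib import Path

# Hypothetical shared constant; mirrors the RESULTS_DIR Makefile variable.
RESULTS_DIR = Path("results")

def get_log_dir(simulator: str, test_name: str) -> Path:
    """Return results/logs/<simulator>/<test>, creating it if needed."""
    log_dir = RESULTS_DIR / "logs" / simulator / test_name
    log_dir.mkdir(parents=True, exist_ok=True)
    return log_dir
```

Importing one helper from all three files means the layout can never drift again.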


2. โŒ Validation not invoked automatically

Problem: test-run only runs RTL; no Spike validation

Fix:

# Inside test_manager.py
def run_test_with_validation(test_name):
    # 1. RTL simulation
    rtl_ok = run_rtl_simulation(test_name)

    if not rtl_ok:
        return "SIMULATION_CRASHED"

    # 2. Validation (optional, per suite)
    if should_validate(test_name):
        validation = run_validation(test_name)
        return "VALIDATED_PASS" if validation.passed else "VALIDATED_FAIL"
    else:
        return "SIMULATION_COMPLETED"
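
should_validate() is referenced above but not defined in the plan. A minimal sketch, assuming the per-suite switch is the validation_enabled flag from test_registry.json mentioned in Phase 3 (the registry layout shown here is an assumption):

```python
import json
from pathlib import Path

REGISTRY = Path("test_registry.json")  # path is an assumption

def should_validate(test_name: str) -> bool:
    """Look up the per-test validation_enabled flag; default to False."""
    if not REGISTRY.exists():
        return False
    registry = json.loads(REGISTRY.read_text())
    entry = registry.get(test_name, {})
    return bool(entry.get("validation_enabled", False))
```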


3. โŒ Misleading messages

Problem:
- "✓ Test PASSED" → wrong! The simulation merely finished
- "✓ Simulation successful" → acceptable only when there is no validation

Fix:

# verilator_runner.py
if exit_code == 0:
    print("✓ SIMULATION COMPLETED")  # not "test passed": only that the simulation finished

# test_manager.py
if simulation_ok and validation_ok:
    print("✅ TEST PASSED - VALIDATED")
elif simulation_ok:
    print("✓ SIMULATION COMPLETED (validation skipped)")
else:
    print("❌ SIMULATION CRASHED")

4. โŒ Simulation duration shows 0.0

Problem: Wrong timing calculation in verilator_runner.py

Fix:

from datetime import datetime

start_time = datetime.now()
# ... simulation ...
end_time = datetime.now()
elapsed = (end_time - start_time).total_seconds()
print(f"Duration: {elapsed:.2f} s")  # .2f format
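
Alternatively, time.monotonic() is immune to wall-clock adjustments and is the idiomatic choice for measuring elapsed time; a sketch, with a sleep standing in for the simulation call:

```python
import time

start = time.monotonic()
time.sleep(0.05)  # stand-in for the simulation
elapsed = time.monotonic() - start
print(f"Duration: {elapsed:.2f} s")
```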


5. โŒ Debug log directories empty

Problem: Debug logger not writing files

Fix:
- debug_logger.py: check for file write errors
- Add permission checks
- Ensure directory creation is reliable
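
A minimal sketch of what those checks could look like (open_debug_log is a hypothetical helper, not the existing debug_logger API):

```python
import os
from pathlib import Path

def open_debug_log(log_dir: Path, name: str = "debug.json"):
    """Create the directory, verify it is writable, then open the file.

    Failing early with a clear error beats silently leaving empty directories.
    """
    log_dir.mkdir(parents=True, exist_ok=True)
    if not os.access(log_dir, os.W_OK):
        raise PermissionError(f"Debug log dir not writable: {log_dir}")
    # Line buffering flushes each line, so logs survive a crashed simulation.
    return open(log_dir / name, "w", buffering=1)
```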


6. โŒ Verilator logs on screen

Problem: Verilator output floods the terminal; hard to see summaries

Fix:

# verilator_runner.py
import subprocess

LOG_FILE = log_dir / "verilator_full.log"

with open(LOG_FILE, 'w') as logf:
    process = subprocess.run(
        cmd,
        stdout=logf,
        stderr=subprocess.STDOUT
    )

# Show summary only
print(f"✓ Simulation completed - Full log: {LOG_FILE}")


7. โŒ No HTML report

Problem: HTML report not generated automatically

Fix:

# test_manager.py
if validation.passed:
    generate_html_report(test_name, validation)
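
The plan does not show generate_html_report itself; a minimal sketch of what it might write, assuming only a validation object with a passed attribute (the output path and file layout are assumptions; the real generator lives in html_report_generator.py):

```python
from pathlib import Path

def generate_html_report(test_name, validation, out_dir=Path("results/reports")):
    """Write a one-page pass/fail summary as <test_name>.html."""
    out_dir.mkdir(parents=True, exist_ok=True)
    status = "PASSED" if validation.passed else "FAILED"
    html = f"<html><body><h1>{test_name}: {status}</h1></body></html>"
    (out_dir / f"{test_name}.html").write_text(html)
```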


8. โŒ No parameters in debug logs

Problem: It is unclear which commands ran with which arguments

Fix:

# debug_logger.py, inside a step
step.command("verilator --cc ...")
step.arguments(["--test", "rv32ui-p-add", "--max-cycles", "100000"])

# In JSON:
{
  "execution_flow": [
    {
      "command": "verilator --cc ...",
      "args": ["--test", "rv32ui-p-add"],
      "env": {"MAX_CYCLES": "100000"}
    }
  ]
}
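
The step.command()/step.arguments() API above is a sketch, not the confirmed debug_logger interface; a minimal recorder that would produce the JSON shown could look like:

```python
import json

class DebugStep:
    """Minimal recorder matching the step.command()/step.arguments() calls above."""

    def __init__(self):
        self.record = {"command": None, "args": [], "env": {}}

    def command(self, cmd: str):
        self.record["command"] = cmd

    def arguments(self, args: list):
        self.record["args"] = list(args)

    def to_json(self) -> str:
        return json.dumps({"execution_flow": [self.record]}, indent=2)
```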


🔨 Implementation order

Phase 1: Log directory layout (critical)

  1. Makefile.verilator: LOG_DIR → results/logs/verilator/
  2. test_manager.py: path updates
  3. validation_runner.py: path updates

Phase 2: Message fixes (easy)

  1. verilator_runner.py: "Test PASSED" → "SIMULATION COMPLETED"
  2. test_manager.py: final decision logic

Phase 3: Validation integration (medium)

  1. test_manager.py: run_validation() function
  2. test_registry.json: validation_enabled flag
  3. Automatic invocation

Phase 4: Debug improvements (easy)

  1. Timing fix
  2. Command logging
  3. Output redirection

Phase 5: HTML report (optional)

  1. Integrate html_report_generator.py

๐Ÿ“ Priority fixes

PRIORITY 1: Log directory

Impact: High - files were landing in the wrong place

# Makefile.verilator
-LOG_DIR := $(BUILD_DIR)/log/verilator/$(TEST_NAME)
+LOG_DIR := $(RESULTS_DIR)/logs/verilator/$(TEST_NAME)

PRIORITY 2: Validation invocation

Impact: Critical - test correctness was unknown

# test_manager.py, inside cmd_run()
results = runner.run_tests_with_validation(tests_to_run, **kwargs)

PRIORITY 3: Messages

Impact: Medium - user confusion

print("✓ SIMULATION COMPLETED")  # not "Test PASSED"

🧪 Test plan

After each fix:

# Run test
make -f Makefile.verilator test-run TEST_NAME=rv32ui-p-add

# Verify
ls -la results/logs/verilator/rv32ui-p-add/
cat results/logs/verilator/rv32ui-p-add/diff.log


✅ Success criteria

The fix is successful if:
- ✅ Logs under results/logs/SIMULATOR/TEST/
- ✅ Validation runs automatically
- ✅ Correct messages: "SIMULATION COMPLETED" vs "TEST PASSED (VALIDATED)"
- ✅ Duration displayed correctly
- ✅ Debug logs populated and informative
- ✅ Verilator output in file, summary on screen
- ✅ HTML report automatic
- ✅ No permission errors