🧬 Copilot-LD

An intelligent agent leveraging GitHub Copilot and Linked Data

Development Guide

This guide focuses on local development, testing, and validation workflows for Copilot-LD. For initial setup and configuration, see the Configuration Guide. For knowledge base processing, see the Processing Guide.

Prerequisites

Complete the initial setup steps described in the Configuration Guide before starting development.

Development Workflow

By default, all services run in development mode with debug logging enabled.

# Launch all services in development mode
npm run dev:start

# Stop development environment
npm run dev:stop

# Restart development environment
npm run dev:restart

Testing and Validation

Run various testing and validation checks:

# Run the full test suite
npm test

# Re-run tests on file changes
npm run test:watch

# Run performance tests
npm run perf

# Run all quality checks
npm run check

# Run quality checks and automatically fix issues
npm run check:fix

Individual quality checks:

# ESLint code analysis
npm run lint

# Prettier formatting check
npm run format

# Documentation spell check
npm run spellcheck

Security Auditing

Check for security vulnerabilities:

npm run security

Development Scripts

The scripts/ directory contains utilities for development and testing:

Interactive Testing

# Interactive chat with the system
npm run chat
> What are Docker security best practices?
> How do I implement rolling updates in Kubernetes?

# Semantic search testing
npm run search
> docker security
> kubernetes deployment

# Graph query testing
npm run query
> ? rdf:type schema:Person
> person:john ? ?

# Trace visualization with Mermaid diagrams
npm run cli:visualize
> [?kind==`2`]  # Show all SERVER spans
> [?contains(name, 'llm')]  # Show LLM-related operations

Evaluation Testing

The evaluation system assesses agent response quality using LLM-as-a-judge methodology with a two-step workflow:

# Step 1: Run evaluation tests (stores results in EvaluationIndex)
npm run eval

# Run with custom concurrency and iterations
npm run eval -- --concurrency 10 --iterations 3

# Step 2: Generate reports from stored results
npm run eval:report
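Conceptually, the judging step sends each agent response to a model together with grading criteria and parses a score back. The following is a minimal sketch of that pattern; the prompt wording, score scale, and callLLM helper are illustrative assumptions, not the project's actual implementation:

// Minimal LLM-as-a-judge sketch. `callLLM` is a hypothetical helper that
// sends a prompt to a chat model and returns its text completion.
async function judgeResponse(question, answer, callLLM) {
  const prompt = [
    "You are an impartial evaluator. Rate the answer to the question",
    "on a scale of 1-5 for accuracy and completeness.",
    `Question: ${question}`,
    `Answer: ${answer}`,
    'Reply with JSON: {"score": <1-5>, "reasoning": "<short explanation>"}',
  ].join("\n");

  const raw = await callLLM(prompt);
  return JSON.parse(raw); // e.g. { score: 4, reasoning: "..." }
}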

Evaluation System:

Test cases are defined in config/eval.yml based on the BioNova pharmaceutical demo scenarios.

Reports are written to:

Scripted Testing

# Pipe input for automated testing
echo "Tell me about container security" | npm run cli:chat

# Search with CLI flags for precise testing
echo "docker" | npm run cli:search -- --limit 10 --threshold 0.25

# Test descriptor-based search
echo "kubernetes deployment" | npm run cli:search -- --index descriptor --limit 5

System Validation Scripts

# Test vector embeddings and similarity
node scripts/cosine.js "first text" "second text"

# Validate model configurations
npx env-cmd -- node scripts/models.js

# Test embedding generation
echo "sample text" | npx env-cmd -- node scripts/embed.js

Development Utilities

# Generate SSL certificates for HTTPS development
node scripts/cert.js

# Generate service authentication secrets
node scripts/secret.js

# GitHub token setup and validation
npx env-cmd -- node scripts/token.js
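For illustration, generating a service authentication secret usually amounts to producing cryptographically random bytes; a minimal sketch of that idea (not necessarily what scripts/secret.js does):

import crypto from "node:crypto";

// 32 random bytes, hex-encoded: a 64-character shared secret.
const secret = crypto.randomBytes(32).toString("hex");
console.log(secret);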

Code Generation Workflow

For details about Protocol Buffer compilation and type generation, see the Code Generation section in the Architecture Guide.

Generate all code from proto definitions:

mkdir -p generated
npm run codegen

Generate specific components:

# Type definitions only
npm run codegen:type

# Client stubs only
npm run codegen:client

# Service bases only
npm run codegen:service

Web Interface Testing

Access the development interfaces:

Trace Analysis

All system operations are automatically traced and stored in data/traces/ as daily JSONL files.
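Each line in these files is one span serialized as JSON, so they can be processed with standard JSONL tooling. A minimal sketch of loading a day's spans in Node (the field names shown match the jq examples below):

import fs from "node:fs";

// Read one day's trace file and parse each JSONL line into a span object.
const day = new Date().toISOString().slice(0, 10); // UTC date, e.g. "2024-01-31"
const lines = fs.readFileSync(`data/traces/${day}.jsonl`, "utf8");
const spans = lines
  .split("\n")
  .filter((line) => line.trim() !== "")
  .map((line) => JSON.parse(line));

console.log(`${spans.length} spans, ${new Set(spans.map((s) => s.trace_id)).size} traces`);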

Interactive Trace Visualization

The visualize CLI tool generates Mermaid sequence diagrams with JMESPath query support:

# Launch interactive REPL
npm run cli:visualize

# Query traces with JMESPath expressions
> [?kind==`2`]  # All SERVER spans
> [?contains(name, 'llm')]  # LLM operations
> [?attributes."resource_id"=='common.Conversation.abc123']  # Specific conversation

# Filter by trace or resource ID
--trace f6a4a4d0d3e91
--resource common.Conversation.abc123
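The same queries can be run programmatically. A minimal sketch assuming the jmespath npm package and an array of parsed spans (the sample span values are illustrative only; see the JSONL loading sketch above):

import jmespath from "jmespath";

// `spans`: array of parsed span objects, as loaded from the JSONL files.
const spans = [
  { name: "llm.completion", kind: 2, trace_id: "abc" },
  { name: "vector.search", kind: 2, trace_id: "abc" },
];

const serverSpans = jmespath.search(spans, "[?kind==`2`]"); // both spans
const llmSpans = jmespath.search(spans, "[?contains(name, 'llm')]"); // first span only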

Visualization output shows:

Viewing Raw Traces

# View today's traces
cat data/traces/$(date +%Y-%m-%d).jsonl | jq

# Count spans by operation type
cat data/traces/*.jsonl | jq -r '.name' | sort | uniq -c

# Count spans by service
cat data/traces/*.jsonl | jq -r '.attributes["service.name"]' | sort | uniq -c

# Find all spans for a specific trace ID
cat data/traces/*.jsonl | jq 'select(.trace_id == "TRACE_ID_HERE")'

# Find error spans
cat data/traces/*.jsonl | jq 'select(.status.message != "")'

Analyzing Request Flows

# Find traces with most spans (complex operations)
cat data/traces/*.jsonl | jq -r '.trace_id' | sort | uniq -c | sort -rn | head

# Group spans by trace ID and count spans per trace
cat data/traces/*.jsonl | \
  jq -r '{trace_id, name, parent: .parent_span_id}' | \
  jq -s 'group_by(.trace_id) | map({trace: .[0].trace_id, spans: length})'
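To measure actual parent-child nesting rather than span counts, walk each span's parent chain; a minimal sketch, assuming each span carries span_id and parent_span_id fields (parent_span_id appears in the spans; span_id is an assumption here) with an empty parent for root spans:

// Compute the maximum parent-child nesting depth among one trace's spans.
function maxDepth(spans) {
  const byId = new Map(spans.map((s) => [s.span_id, s]));
  const depthOf = (span) => {
    let depth = 1;
    let parent = byId.get(span.parent_span_id);
    while (parent) {
      depth++;
      parent = byId.get(parent.parent_span_id);
    }
    return depth;
  };
  return Math.max(...spans.map(depthOf));
}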

Common Trace Patterns

Agent Request Trace:

Tool Execution Chain:

Vector Search:

For detailed tracing implementation, see the Reference Guide.