
Configuration

Configure Tameshi behavior for CLI and VSCode integration.

VSCode Configuration

Configure Tameshi through the VSCode Settings UI (Cmd+, or Ctrl+,) or directly in .vscode/settings.json.

Essential Settings

json
{
  "tameshi.server.path": "/path/to/tameshi-lsp/target/release/tameshi-lsp",
  "tameshi.scan.onSave": "file",
  "tameshi.scan.include": ["**/*.sol"],
  "tameshi.scan.exclude": ["**/node_modules/**", "**/test/**"],
  "tameshi.llm.enabled": false
}

Server Configuration

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| `tameshi.server.path` | string | - | Path to Tameshi LSP server binary |
| `tameshi.server.args` | string[] | `[]` | Additional LSP server arguments |
| `tameshi.server.env` | object | `{}` | Environment variables for server |
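
If the server needs extra arguments or environment variables, they go alongside the binary path. A minimal sketch; the --log-level flag and RUST_LOG variable are placeholders rather than documented Tameshi options, so substitute whatever your server build actually accepts:

json
{
  "tameshi.server.path": "/path/to/tameshi-lsp/target/release/tameshi-lsp",
  // Placeholder argument and variable - replace with options your server build accepts
  "tameshi.server.args": ["--log-level", "debug"],
  "tameshi.server.env": { "RUST_LOG": "info" }
}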

Scan Behavior

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| `tameshi.scan.onSave` | string | `"file"` | When to scan: "none", "file", "workspace" |
| `tameshi.scan.onSave.debounce` | number | `1000` | Debounce delay in milliseconds |
| `tameshi.scan.include` | string[] | `["**/*.sol"]` | Files to scan (glob patterns) |
| `tameshi.scan.exclude` | string[] | `["**/node_modules/**"]` | Files to exclude |
| `tameshi.scan.onIdle.enabled` | boolean | `true` | Scan when the editor is idle |
| `tameshi.scan.onIdle.idleSeconds` | number | `30` | Idle threshold in seconds |
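
As a sketch, on-save and idle scanning can be combined; the debounce and idle values below are illustrative, not recommendations:

json
{
  "tameshi.scan.onSave": "file",
  "tameshi.scan.onSave.debounce": 1000,
  "tameshi.scan.onIdle.enabled": true,
  "tameshi.scan.onIdle.idleSeconds": 60
}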

LLM Configuration

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| `tameshi.llm.enabled` | boolean | `false` | Enable LLM scanning |
| `tameshi.llm.provider` | string | `"openai"` | Provider; currently only "openai" is supported |
| `tameshi.llm.apiKey` | string | - | API key (use `${env:VAR_NAME}` for environment variables) |
| `tameshi.llm.model` | string | `"gpt-4"` | Model name |
| `tameshi.llm.temperature` | number | `0.2` | Response creativity (0.0-2.0) |
| `tameshi.llm.maxTokens` | number | `4000` | Maximum response tokens |
| `tameshi.llm.confidenceThreshold` | number | `0.5` | Minimum confidence (0.0-1.0) |
| `tameshi.llm.timeoutSeconds` | number | `120` | Request timeout in seconds |
| `tameshi.llm.useIrScanning` | boolean | `false` | Send ThalIR instead of source |
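
The tuning settings can be adjusted independently of the provider settings. A sketch with illustrative values, e.g. raising the confidence threshold so only higher-confidence findings are reported:

json
{
  "tameshi.llm.temperature": 0.1,
  "tameshi.llm.maxTokens": 2000,
  "tameshi.llm.confidenceThreshold": 0.7,
  "tameshi.llm.timeoutSeconds": 180
}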

Findings Display

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| `tameshi.findings.view.groupBy` | string | `"severity"` | Group by: "severity", "file", "rule" |
| `tameshi.findings.minCorrelationScore` | number | `0.5` | Minimum correlation score (0.0-1.0) |
| `tameshi.findings.mergeMode` | string | `"merged"` | Display mode: "raw" or "merged" |
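
For example, to inspect every raw finding grouped by rule instead of the merged view (illustrative values):

json
{
  "tameshi.findings.view.groupBy": "rule",
  "tameshi.findings.mergeMode": "raw"
}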

Editor Features

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| `tameshi.editor.hovers.enabled` | boolean | `true` | Show hover information |
| `tameshi.editor.gutterIcons.enabled` | boolean | `true` | Show gutter icons |
| `tameshi.editor.codeLens.enabled` | boolean | `true` | Show CodeLens summaries |
| `tameshi.editor.statusBar.enabled` | boolean | `true` | Show status bar item |
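
If the inline UI feels noisy, individual features can be disabled, for example:

json
{
  "tameshi.editor.codeLens.enabled": false,
  "tameshi.editor.gutterIcons.enabled": false,
  "tameshi.editor.hovers.enabled": true,
  "tameshi.editor.statusBar.enabled": true
}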

CLI Configuration

Environment Variables

Set defaults via environment variables:

bash
export TAMESHI_MIN_SEVERITY=medium
export TAMESHI_OUTPUT_FORMAT=json
export OPENAI_API_KEY=sk-...

LLM Configuration File

Create .tameshi/llm-config.yaml in your project:

yaml
provider:
  type: openai
  model: gpt-4
  api_key: ${OPENAI_API_KEY}

enabled_scanners:
  - reentrancy
  - access_control
  - integer_overflow

global:
  default_temperature: 0.2
  default_max_tokens: 4000
  retry_attempts: 3
  timeout_seconds: 60
  use_ir_scanning: false

Example Configurations

Development Workflow

Fast feedback with on-save file scanning:

json
{
  "tameshi.scan.onSave": "file",
  "tameshi.scan.onSave.debounce": 500,
  "tameshi.scan.include": ["contracts/**/*.sol"],
  "tameshi.scan.exclude": [
    "**/node_modules/**",
    "**/test/**",
    "**/mocks/**"
  ],
  "tameshi.findings.view.groupBy": "severity",
  "tameshi.editor.codeLens.enabled": true,
  "tameshi.llm.enabled": false
}

Security Audit

Comprehensive scanning with LLM:

json
{
  "tameshi.scan.onSave": "none",
  "tameshi.llm.enabled": true,
  "tameshi.llm.provider": "openai",
  "tameshi.llm.model": "gpt-4",
  "tameshi.llm.apiKey": "${env:OPENAI_API_KEY}",
  "tameshi.llm.useIrScanning": true,
  "tameshi.findings.minCorrelationScore": 0.7,
  "tameshi.findings.mergeMode": "merged"
}

Team Workspace

Shared settings for consistency:

json
{
  "tameshi.server.path": "${workspaceFolder}/../tameshi-lsp/target/release/tameshi-lsp",
  "tameshi.scan.onSave": "file",
  "tameshi.scan.include": ["src/**/*.sol"],
  "tameshi.scan.exclude": [
    "**/node_modules/**",
    "**/.deps/**",
    "**/artifacts/**"
  ],
  "tameshi.findings.view.groupBy": "file",
  "tameshi.editor.hovers.enabled": true,
  "tameshi.editor.gutterIcons.enabled": true
}

CI/CD Pipeline

Bash script with environment variables:

bash
#!/bin/bash

export TAMESHI_MIN_SEVERITY=critical
export TAMESHI_OUTPUT_FORMAT=json

tameshi scan run -i src/ --format json > findings.json

# Fail if critical findings exist
if grep -q '"severity":"Critical"' findings.json; then
  echo "Critical vulnerabilities found"
  exit 1
fi
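
The grep check above assumes compact JSON with no space after the colon. If jq is available, a structural check is less fragile; the .findings[] path below is an assumption about the output shape, so adjust it to whatever tameshi actually emits:

bash
# Assumes a top-level "findings" array with a "severity" field per finding - adjust to the real output shape.
critical=$(jq '[.findings[]? | select(.severity == "Critical")] | length' findings.json)
if [ "$critical" -gt 0 ]; then
  echo "Critical vulnerabilities found: $critical"
  exit 1
fi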

Workspace vs User Settings

User Settings (global, all projects):

  • Server path
  • Editor preferences
  • UI customization

Workspace Settings (project-specific, committed to git):

  • Scan patterns
  • Excluded directories
  • LLM configuration
  • Minimum thresholds

Commit .vscode/settings.json for team consistency.

Smart AI Rescan Configuration

json
{
  "tameshi.scan.ai.smartRescan": "file",
  "tameshi.scan.ai.smartRescan.debounce": 3000,
  "tameshi.scan.ai.smartRescan.contextLines": 2,
  "tameshi.scan.ai.smartRescan.batchThreshold": 5,
  "tameshi.scan.ai.smartRescan.minSeverity": "medium"
}

Values for tameshi.scan.ai.smartRescan:

  • off - Disabled
  • file - Rescan individual files as they change
  • batch - Batch multiple changed files for efficiency (see the sketch below)
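
A batch-oriented variant might look like the sketch below; the threshold and debounce values are illustrative:

json
{
  "tameshi.scan.ai.smartRescan": "batch",
  "tameshi.scan.ai.smartRescan.batchThreshold": 10,
  "tameshi.scan.ai.smartRescan.debounce": 5000,
  "tameshi.scan.ai.smartRescan.minSeverity": "medium"
}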

API Key Management

bash
# In ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY=sk-...

Then reference it in your settings:

json
{
  "tameshi.llm.apiKey": "${env:OPENAI_API_KEY}"
}

Avoid committing API keys to git. If you must store the key inline in settings:

json
{
  "tameshi.llm.apiKey": "sk-..."
}

then add the settings file to .gitignore:

.vscode/settings.json

Configuration File

Alternatively, store the key in .tameshi/llm-config.yaml (and add that file to .gitignore):

yaml
provider:
  api_key: ${OPENAI_API_KEY}

Next Steps
