Best Practices

This guide outlines recommended practices for working with the Instinct API to ensure optimal performance, security, and maintainability of your applications.

General API Usage

Handling Requests and Responses

  1. Always check response success status

    • Verify the success field in API responses
    • Handle errors properly with appropriate user feedback
  2. Implement proper error handling

    • Use error codes for programmatic handling
    • Implement retry logic with exponential backoff for transient errors (see the sketch after this list)
    • See the Error Handling guide for details
  3. Optimize request frequency

    • Batch operations when possible
    • Implement caching for frequently accessed data
    • Avoid polling endpoints too frequently
  4. Use appropriate HTTP methods

    • GET for retrieving data
    • POST for creating resources
    • PUT for updating resources
    • DELETE for removing resources
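
As a minimal sketch of the first two points, the example below checks the success field and retries transient failures with exponential backoff. The base URL, endpoint paths, and response fields are assumptions; adapt them to your deployment and the API reference.

```python
import time
import requests

API_BASE = "http://localhost:8080/api"  # hypothetical base URL; adjust to your deployment

def call_api(method, path, *, retries=3, backoff=1.0, **kwargs):
    """Send a request, verify the success field, and retry transient errors
    with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            response = requests.request(method, f"{API_BASE}{path}", timeout=10, **kwargs)
            response.raise_for_status()
            body = response.json()
            if body.get("success"):            # always check the success flag
                return body
            # A well-formed error response is not transient: surface it immediately.
            raise RuntimeError(f"API error: {body.get('error')}")
        except (requests.ConnectionError, requests.Timeout):
            if attempt == retries:
                raise                          # retries exhausted
            time.sleep(backoff * 2 ** attempt)

# Usage (method matches the operation: GET to read, POST to create, ...):
# status = call_api("GET", "/devices/status")
# stream = call_api("POST", "/streams", json={"id": "my-stream"})
```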

Authentication and Security

  1. Protect API keys

    • Never expose API keys in client-side code
    • Use environment variables or secure storage (see the sketch after this list)
    • Implement key rotation in production environments
  2. Validate input data

    • Check user inputs before sending to the API
    • Sanitize data to prevent injection attacks
    • Validate against expected schemas
  3. Implement proper access controls

    • Restrict access to sensitive operations
    • Audit access to critical endpoints
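
A minimal sketch of key handling and input validation is shown below. The environment variable name, header format, and request fields are illustrative assumptions rather than part of the API.

```python
import os

API_KEY = os.environ["INSTINCT_API_KEY"]          # never hard-code keys in source

# Expected shape of a session-creation request (illustrative field names).
REQUIRED_FIELDS = {"subject_id": str, "session_name": str, "sampling_rate": int}

def validate_session_request(payload: dict) -> dict:
    """Check user-supplied data against the expected schema before sending it."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        if not isinstance(payload[field], expected_type):
            raise ValueError(f"{field} must be {expected_type.__name__}")
    # Drop anything outside the schema so unexpected data is never forwarded.
    return {key: payload[key] for key in REQUIRED_FIELDS}

# Attach the key server-side only; the header name is an assumption.
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
```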

Data Processing Best Practices

Stream Design

  1. Keep streams focused

    • Design each stream with a clear purpose
    • Split complex workflows into multiple streams when appropriate
    • Use descriptive IDs for streams, nodes, and pipes
  2. Optimize node configurations

    • Only process data that's needed
    • Configure appropriate buffer sizes
    • Use efficient data formats between nodes
  3. Ensure DAG integrity

    • Avoid creating cycles in the graph
    • Validate the pipeline structure before execution (as shown below)
    • Use proper error handling within nodes
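
One way to validate DAG integrity before execution is a topological-sort check (Kahn's algorithm), sketched below against an assumed representation of the stream as node IDs and (source, target) pipes.

```python
from collections import defaultdict, deque

def validate_dag(nodes, pipes):
    """Verify that a stream definition has no cycles before execution.

    `nodes` is a list of node IDs; `pipes` is a list of (source, target) pairs.
    If a topological order cannot include every node, the graph has a cycle.
    """
    indegree = {node: 0 for node in nodes}
    outgoing = defaultdict(list)
    for source, target in pipes:
        if source not in indegree or target not in indegree:
            raise ValueError(f"pipe references unknown node: {source} -> {target}")
        outgoing[source].append(target)
        indegree[target] += 1

    queue = deque(node for node, degree in indegree.items() if degree == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for target in outgoing[node]:
            indegree[target] -= 1
            if indegree[target] == 0:
                queue.append(target)

    if visited != len(nodes):
        raise ValueError("stream graph contains a cycle")

# validate_dag(["acquire", "filter", "classify"],
#              [("acquire", "filter"), ("filter", "classify")])
```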

Performance Considerations

  1. Monitor resource usage

    • Check memory and CPU utilization
    • Scale buffer sizes based on data rates
    • Consider the hardware capabilities when designing complex pipelines
  2. Handle backpressure

    • Implement flow control between nodes (see the sketch after this list)
    • Configure appropriate queuing strategies
    • Use throttling when necessary
  3. Test pipeline behavior

    • Test with representative data volumes
    • Validate node behavior with edge cases
    • Ensure clean shutdown and restart capabilities
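
A common way to implement flow control between nodes is a bounded queue: when the consumer falls behind, the producer blocks instead of letting memory grow without bound. The sketch below is generic Python, not the SDK's own queuing API.

```python
import queue
import threading

buffer = queue.Queue(maxsize=256)        # bounded: a full buffer applies backpressure

def producer(samples):
    for sample in samples:
        buffer.put(sample, timeout=5)    # blocks (or times out) when the consumer lags

def consumer(stop_event):
    while not stop_event.is_set() or not buffer.empty():
        try:
            sample = buffer.get(timeout=0.5)
        except queue.Empty:
            continue
        # ... replace with the node's real work ...
        buffer.task_done()

stop = threading.Event()
worker = threading.Thread(target=consumer, args=(stop,), daemon=True)
worker.start()
producer(range(10_000))
buffer.join()                            # wait until in-flight samples are processed
stop.set()
```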

Hardware Interaction Best Practices

Device Management

  1. Check device status regularly

    • Verify hardware connectivity before operations (see the sketch after this list)
    • Monitor battery status for wireless devices
    • Handle disconnections gracefully
  2. Optimize EEG configurations

    • Only stream channels that are needed
    • Select appropriate sampling rates for the use case
    • Apply filters based on the signal of interest
  3. Perform impedance checks

    • Always check impedances before important data collection
    • Establish quality thresholds for your application
    • Document electrode conditions
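
A pre-session check covering connectivity, battery, and impedances might look like the sketch below, which reuses the call_api helper from the request-handling example. The endpoint paths, response fields, and the 20 kΩ threshold are assumptions; consult the API reference and your own quality criteria.

```python
IMPEDANCE_THRESHOLD_KOHM = 20            # example quality threshold; tune per application

def ready_to_record():
    """Verify connectivity, battery, and impedances before starting a recording."""
    # call_api is the helper defined in the request-handling sketch above.
    status = call_api("GET", "/devices/status")          # hypothetical endpoint
    if not status.get("connected"):
        raise RuntimeError("headset not connected")
    if status.get("battery_percent", 100) < 20:
        raise RuntimeError("battery too low for a reliable session")

    impedances = call_api("GET", "/devices/impedance")   # hypothetical endpoint
    bad = [channel for channel, kohm in impedances.get("channels", {}).items()
           if kohm > IMPEDANCE_THRESHOLD_KOHM]
    if bad:
        raise RuntimeError(f"impedance above threshold on: {', '.join(bad)}")
```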

Data Management

  1. Structured metadata

    • Use consistent naming conventions for subjects and sessions (see the sketch after this list)
    • Include detailed metadata with your data
    • Document experimental conditions
  2. Data security

    • Implement proper access controls for sensitive EEG data
    • Consider data encryption for storage
    • Follow relevant data protection regulations
  3. Backup strategies

    • Implement regular backup procedures for important data
    • Consider redundant storage for critical data
    • Validate backup integrity
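
The sketch below shows one consistent naming scheme (BIDS-style sub-/ses- labels, chosen here as an assumption rather than a requirement) and a checksum helper for validating that backups match the originals.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def session_metadata(subject_id: str, session_num: int, **conditions) -> dict:
    """Build metadata using one consistent naming convention (e.g. sub-01 / ses-002)."""
    return {
        "subject": f"sub-{subject_id}",
        "session": f"ses-{session_num:03d}",
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "conditions": conditions,                    # document experimental conditions
    }

def file_checksum(path: Path) -> str:
    """SHA-256 digest for verifying that a backup matches the original file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# meta = session_metadata("01", 2, task="resting-state", eyes="closed")
# Path("sub-01_ses-002.json").write_text(json.dumps(meta, indent=2))
# assert file_checksum(Path("eeg.dat")) == file_checksum(Path("backup/eeg.dat"))
```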

Development Workflow

Testing

  1. Create a testing strategy

    • Unit tests for critical components
    • Integration tests for API interactions (see the sketch after this list)
    • End-to-end tests for complete workflows
  2. Use staged environments

    • Development for initial work
    • Staging for integration testing
    • Production for final deployment
  3. Test with realistic data

    • Create representative test datasets
    • Simulate real-world usage patterns
    • Test with various data rates and volumes
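
Integration tests for API interactions can simulate responses so the suite runs without hardware or network access. The sketch below exercises the call_api helper from the earlier example with unittest.mock; the response shape is an assumption.

```python
import requests
from unittest import mock

def test_retry_on_timeout():
    """Simulate one transient timeout followed by a successful response."""
    ok = mock.Mock()
    ok.json.return_value = {"success": True, "data": []}
    ok.raise_for_status.return_value = None

    # call_api is the helper from the request-handling sketch above.
    with mock.patch("requests.request", side_effect=[requests.Timeout(), ok]) as fake:
        body = call_api("GET", "/devices/status", retries=2, backoff=0)

    assert body["success"]
    assert fake.call_count == 2              # one failure, one retry
```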

Deployment

  1. Use configuration management

    • Externalize configuration from code
    • Use environment variables for deployment-specific settings (see the sketch after this list)
    • Document configuration options
  2. Monitor API usage

    • Track API call patterns and frequencies
    • Set up alerts for unusual behavior
    • Monitor for performance bottlenecks
  3. Integrate with version control

    • Manage API client code in version control
    • Document API versions used
    • Test upgrades before deployment
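
One way to externalize configuration is a small settings object populated from environment variables; the variable names and defaults below are illustrative, not prescribed by the API.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Deployment-specific settings kept outside the code base."""
    api_base: str
    request_timeout_s: float
    log_level: str

def load_settings() -> Settings:
    # Variable names and defaults are illustrative; document whatever you adopt.
    return Settings(
        api_base=os.environ.get("INSTINCT_API_BASE", "http://localhost:8080/api"),
        request_timeout_s=float(os.environ.get("INSTINCT_TIMEOUT_S", "10")),
        log_level=os.environ.get("INSTINCT_LOG_LEVEL", "INFO"),
    )
```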

Documentation

  1. Document your integration

    • Create internal documentation for your API usage
    • Document custom workflows and configurations
    • Keep records of any issues and their resolutions
  2. Maintain configuration documentation

    • Document stream configurations
    • Record EEG acquisition parameters
    • Keep logs of hardware configurations

Advanced Usage

Custom Nodes and Extensibility

  1. Follow node development guidelines

    • Implement standard interfaces
    • Handle errors gracefully
    • Document input/output formats
  2. Clean up resources

    • Ensure proper resource release on shutdown
    • Handle termination signals appropriately (as shown below)
    • Implement graceful shutdown procedures
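
The skeleton below illustrates the error-handling, cleanup, and signal-handling points above. Its class and method names are placeholders rather than the SDK's actual node interface; follow the node development guidelines for the real contract.

```python
import signal
import threading

class CustomNode:
    """Placeholder node skeleton; the real base class and interface may differ."""

    def __init__(self):
        self._stop = threading.Event()

    def process(self, sample):
        try:
            return sample                       # replace with real processing
        except Exception as exc:
            # Fail soft: log and drop the sample rather than crashing the stream.
            print(f"node error, dropping sample: {exc}")
            return None

    def run(self, samples):
        for sample in samples:
            if self._stop.is_set():
                break
            self.process(sample)
        self.close()

    def close(self):
        self._stop.set()
        # Release files, sockets, and device handles here.

node = CustomNode()
# Translate SIGINT/SIGTERM into a graceful shutdown request instead of dying mid-sample.
signal.signal(signal.SIGINT, lambda *_: node.close())
signal.signal(signal.SIGTERM, lambda *_: node.close())
node.run(range(100))
```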

Integration with Other Systems

  1. Use standard data formats

    • Prefer widely used formats for data exchange
    • Document data schemas
    • Implement converters when necessary (see the sketch after this list)
  2. Design for modularity

    • Design integrations to be modular and reusable
    • Implement adapters for different systems
    • Use abstraction layers to simplify future changes
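
As an example of exchanging data in a widely readable form, the converter below writes samples to CSV and keeps the documented schema next to the exported data. The field names and units are assumptions about your internal sample format.

```python
import csv
import json
from pathlib import Path

# Documented exchange schema: one CSV row per sample (field names are illustrative).
SCHEMA = {
    "timestamp": "seconds since epoch (float)",
    "channel": "electrode label (str)",
    "value": "amplitude in microvolts (float)",
}

def export_csv(samples, path: Path) -> None:
    """Convert internal sample dicts into CSV that downstream tools can read."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(SCHEMA))
        writer.writeheader()
        for sample in samples:
            writer.writerow({key: sample[key] for key in SCHEMA})

def write_schema(path: Path) -> None:
    """Ship the schema description alongside the exported data."""
    path.write_text(json.dumps(SCHEMA, indent=2))
```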

Functional Best Practices

Data Acquisition

  • Always check electrode impedances before starting data collection
  • Minimize electrical interference in the recording environment
  • Ensure the subject is comfortable to reduce movement artifacts
  • Use appropriate filtering for your specific application
  • Start with validated pipeline configurations before customizing

Data Processing

  • Start with simple pipelines and add complexity incrementally
  • Test each processing step individually before combining
  • Monitor CPU and memory usage to avoid overloading the system
  • Use appropriate buffer sizes for your sampling rate (see the sketch after this list)
  • Document your processing pipelines for reproducibility
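
A simple rule of thumb for buffer sizing is samples per processing window times channel count; the figures below are only an example.

```python
def buffer_samples(sampling_rate_hz: int, window_s: float, channels: int) -> int:
    """Size a buffer to hold one processing window of multichannel data."""
    return int(sampling_rate_hz * window_s) * channels

# e.g. 500 Hz, 0.5 s windows, 8 channels -> 2000 samples per buffer
assert buffer_samples(500, 0.5, 8) == 2000
```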

Security

  • Regularly update the headset firmware
  • Use secure networks for wireless connections
  • Review and audit API access
  • Follow data privacy regulations for subject data
  • Implement user authentication where needed

System Performance

  • Close streams when they are no longer needed
  • Choose appropriate sampling rates for your application
  • Disable channels that aren't necessary for your experiment
  • Consider data reduction techniques for long-term monitoring
  • Monitor system health metrics during extended operations

Next Steps