Commit 1996169

committed
Adds comprehensive security policy documentation
Establishes security guidelines for CUDA kernel usage, dependency management, and memory safety considerations. Provides clear vulnerability reporting procedures with contact information and response timelines. Includes security best practices for environment isolation, input validation, and resource monitoring to help users safely integrate the library.
1 parent e507584 commit 1996169

File tree

1 file changed: +112 -0 lines changed


SECURITY.md

Lines changed: 112 additions & 0 deletions
# Security Policy

## Supported Versions

We actively maintain and provide security updates for the following versions:

| Version  | Supported          |
| -------- | ------------------ |
| Latest   | :white_check_mark: |
| < Latest | :x:                |
## Security Considerations

### CUDA Code Execution

Flash Dynamic Mask Attention includes CUDA kernels and C++ extensions that execute on your GPU. When using this library:

- Only install from trusted sources (official PyPI releases or verified builds)
- Be cautious when building from source with modifications
- Verify checksums when downloading pre-built binaries
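
Checksum verification works with standard tools; a minimal sketch (the wheel filename below is a placeholder, not an actual release artifact — substitute the real file and the digest published with the release):

```bash
# Placeholder artifact; in practice you would download the wheel and
# the .sha256 digest published alongside the release.
echo "example wheel contents" > flash_dmattn-demo.whl
sha256sum flash_dmattn-demo.whl > flash_dmattn-demo.whl.sha256

# Prints "<file>: OK" and exits 0 only if the file matches the digest.
sha256sum -c flash_dmattn-demo.whl.sha256
```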

### Dependencies

This library depends on:

- PyTorch (with CUDA support)
- NVIDIA CUTLASS library
- Standard Python scientific computing libraries

We recommend keeping all dependencies up to date and using virtual environments for isolation.

### Memory Safety

Our CUDA kernels are designed with memory safety in mind:

- Bounds checking is implemented where performance allows
- Memory allocation patterns are tested across different input sizes
- We use established patterns from Flash Attention and CUTLASS

However, as with any low-level CUDA code:

- Very large input tensors may cause out-of-memory errors
- Invalid input shapes may cause undefined behavior
- Custom modifications to kernel code should be thoroughly tested
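
Out-of-memory failures in particular can often be anticipated before launching a kernel. As an illustrative sketch (not part of the library's API), a rough lower bound on the memory the Q, K, V, and output tensors alone will occupy:

```python
import torch

def qkvo_bytes(batch, heads, seq_len, head_dim, dtype=torch.float16):
    """Rough lower bound: bytes held by the Q, K, V, and output tensors.

    Ignores kernel workspace, intermediate buffers, and allocator overhead,
    so treat the result as a floor, not a budget.
    """
    elem = torch.empty((), dtype=dtype).element_size()
    return 4 * batch * heads * seq_len * head_dim * elem

# e.g. batch=8, heads=16, seq_len=4096, head_dim=64 in fp16 -> ~0.27 GB
print(f"{qkvo_bytes(8, 16, 4096, 64) / 1e9:.2f} GB")
```

Comparing such an estimate against free device memory (e.g. via `torch.cuda.mem_get_info()`) is a cheap guard before large workloads.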

## Reporting a Vulnerability

If you discover a security vulnerability, please report it responsibly:

**For security issues:**

- Email: losercheems@gmail.com
- Subject: [SECURITY] Flash-DMA Vulnerability Report
- Include: Detailed description, reproduction steps, and potential impact

**For general bugs:**

- Use our [GitHub Issues](https://github.com/SmallDoges/flash-dmattn/issues)
- Follow our [contributing guidelines](CONTRIBUTING.md)

## Response Timeline

- **Acknowledgment**: Within 48 hours
- **Initial Assessment**: Within 1 week
- **Resolution**: Depends on severity and complexity

Critical security issues will be prioritized and may result in emergency releases.

## Security Best Practices

When using Flash Dynamic Mask Attention:

1. **Environment Isolation**

   ```bash
   # Use virtual environments to isolate the installation
   python -m venv flash_dma_env
   source flash_dma_env/bin/activate  # Linux/macOS
   # or
   flash_dma_env\Scripts\activate     # Windows
   ```

2. **Dependency Management**

   ```bash
   # Keep dependencies updated
   pip install --upgrade torch flash-dmattn
   ```

3. **Input Validation**

   ```python
   # Validate tensor dtypes and shapes before processing
   # (query, key, value are the attention input tensors)
   assert query.dtype in [torch.float16, torch.bfloat16, torch.float32]
   assert query.dtype == key.dtype == value.dtype
   assert query.shape == key.shape == value.shape
   ```

4. **Resource Monitoring**

   ```python
   # Monitor GPU memory usage
   import torch
   print(f"GPU Memory: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
   ```

## Disclosure Policy

- Confirmed vulnerabilities will be disclosed responsibly
- Security fixes will be released as soon as safely possible
- CVE numbers will be requested for significant vulnerabilities
- Credit will be given to security researchers who report issues responsibly

## Contact

For security-related questions or concerns:

- Primary: losercheems@gmail.com
- Project maintainers: See [AUTHORS](AUTHORS) file

For general support:

- GitHub Issues: https://github.com/SmallDoges/flash-dmattn/issues
- Documentation: https://github.com/SmallDoges/flash-dmattn/docs/
