Welcome to VSCode-Local-Copilot! This extension brings GitHub Copilot-style code completions to Visual Studio Code, powered by Ollama models running locally on your machine. Let's get you started!
To download the latest version of VSCode-Local-Copilot, visit the Releases page of the project's GitHub repository.
On the Releases page, follow these steps to get your extension:
- Find the latest release at the top.
- Click on the release title to see the details.
- Locate the version suited for your system.
- Click the correct file name to download it.
Before installing, make sure your system meets the following requirements:
- OS: Windows 10 or later, macOS Mojave or later, or a recent Linux distribution.
- VS Code version: 1.50 or newer.
- Memory: At least 4 GB of RAM.
- Storage: 200 MB of free disk space.
Once the download is complete, follow these steps to install and run VSCode-Local-Copilot:
- Locate the downloaded file:
  - On Windows, it will be in your Downloads folder as a `.zip` file.
  - On macOS, it may be in your Downloads folder in `.zip` format.
  - On Linux, find the file in your Downloads directory.
- Extract the files:
  - On Windows, right-click the `.zip` file and select "Extract All".
  - On macOS, double-click the `.zip` file.
  - On Linux, right-click the file and select "Extract Here".
- Install the extension in Visual Studio Code:
  - Open Visual Studio Code.
  - Click the Extensions icon in the sidebar (or press `Ctrl+Shift+X`).
  - Click the three dots at the top-right corner and choose "Install from VSIX...".
  - Browse to the extracted folder and select the VSIX file.
  - Click "Open" to begin the installation.
- Enable local models:
  - After installation, open the command palette by pressing `Ctrl+Shift+P`.
  - Type "Local Copilot" and select "Enable Local Models".
- Start coding:
  - Create a new file or open an existing one.
  - Start typing, and you will see suggestions pop up!
VSCode-Local-Copilot leverages the powerful Ollama models running locally on your machine. This means faster response times for code suggestions without any need for internet access. Enjoy a seamless development experience with smart completions based on your coding patterns and styles.
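Under the hood, this amounts to HTTP requests against the Ollama server running on your machine. The snippet below is only an illustrative sketch of such a request, not the extension's actual code; it assumes Ollama is listening on its default port (11434) and that a code model such as `codellama` has already been pulled.

```typescript
// Illustrative sketch only: ask a locally running Ollama server for a completion.
// Assumes Ollama listens on its default port (11434) and that a code model such
// as "codellama" has already been pulled; adjust the model name to your setup.
async function completeLocally(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama", // example model name, not a requirement
      prompt,
      stream: false, // ask for a single JSON object instead of a stream
    }),
  });
  if (!res.ok) {
    throw new Error(`Ollama returned HTTP ${res.status}`);
  }
  const data = (await res.json()) as { response: string };
  return data.response; // the generated completion text
}

// Example: ask the local model to finish a small function.
completeLocally("function add(a: number, b: number) {")
  .then((suggestion) => console.log(suggestion))
  .catch((err) => console.error("Is Ollama running?", err));
```

If a request like this succeeds from a script or terminal, the extension should be able to reach the same server.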
- Local Model Usage: No internet required for data processing.
- Smart Suggestions: Receive code completions tailored to your work (a sketch of how these reach the editor follows this list).
- Multi-language Support: Works well with various programming languages, enhancing your workflow.
- Easy Integration: Simple setup process to get started right away.
- Regular Updates: Stay tuned for new features and improvements.
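To give a feel for how suggestions like these reach the editor, here is a minimal sketch of a VS Code inline completion provider. It is not the extension's actual source; `queryLocalModel` is a hypothetical helper standing in for the call to the local Ollama server shown earlier.

```typescript
import * as vscode from "vscode";

// Minimal sketch, not the extension's actual source: registering an inline
// completion provider is the standard way a VS Code extension surfaces
// suggestions as you type. `queryLocalModel` is a hypothetical helper that
// would forward the text before the cursor to the local Ollama server.
declare function queryLocalModel(prompt: string): Promise<string>;

export function activate(context: vscode.ExtensionContext) {
  const provider: vscode.InlineCompletionItemProvider = {
    async provideInlineCompletionItems(document, position) {
      // Everything before the cursor becomes the prompt for the local model.
      const prefix = document.getText(
        new vscode.Range(new vscode.Position(0, 0), position)
      );
      const suggestion = await queryLocalModel(prefix);
      return [new vscode.InlineCompletionItem(suggestion)];
    },
  };

  // `pattern: "**"` registers the provider for every file type, which is how
  // multi-language support is typically achieved.
  context.subscriptions.push(
    vscode.languages.registerInlineCompletionItemProvider(
      { pattern: "**" },
      provider
    )
  );
}
```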
If you encounter issues, please consider the following troubleshooting tips:
- Extension Not Visible: Ensure the VSIX installed correctly and that the extension appears in the Extensions view (`Ctrl+Shift+X`).
- Slow Performance: Check system resources to ensure adequate RAM is available.
- Suggestions Not Appearing: Restart Visual Studio Code, make sure local model features are enabled, and confirm your local Ollama server is reachable (a quick check script follows this list).
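If suggestions still refuse to appear, it is worth confirming that an Ollama server is actually reachable before digging further. The check below is a small standalone script written under a few assumptions (default port 11434, Node 18 or newer for the built-in `fetch`); it is not part of the extension itself.

```typescript
// Quick connectivity check (not part of the extension): confirms that an
// Ollama server is reachable on its default port and lists installed models.
// Assumes Node 18+ so the built-in fetch is available.
async function checkOllama(): Promise<void> {
  try {
    const res = await fetch("http://localhost:11434/api/tags");
    const data = (await res.json()) as { models: { name: string }[] };
    if (data.models.length === 0) {
      console.log("Ollama is running, but no models are installed yet.");
    } else {
      console.log(
        "Local models available:",
        data.models.map((m) => m.name).join(", ")
      );
    }
  } catch {
    console.error("Could not reach Ollama at http://localhost:11434 - is it running?");
  }
}

checkOllama();
```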
For support, please open an issue on the GitHub repository, or refer to the FAQ section. Your feedback and questions are welcome!
If you'd like to contribute to VSCode-Local-Copilot, please follow these guidelines:
- Fork the repository.
- Create a new branch for your feature or fix.
- Make your changes and commit them.
- Push the branch back to your fork.
- Submit a pull request detailing your changes.
VSCode-Local-Copilot is open-source software licensed under the MIT License. You may use, modify, and distribute the code as you wish.
- Thanks to the creators of Ollama for their exceptional work in machine learning and AI.
- A big thank you to the Visual Studio Code community for their continued support and contributions.
Remember, you can always download the latest version of VSCode-Local-Copilot from the project's Releases page.