Two words about AI/LLM #
We have to remember that Large Language Models are based on an algorithm that predicts the next token (word) in a sequence of previously provided tokens. Nothing less and nothing more. This works surprisingly well, but we have to remember that saying that AI "thinks" or "reasons" is just a rhetorical figure.
Claude Code installation #
- Install NodeJS if you don't have it already (download page: https://nodejs.org/en/download)
- At the terminal, run npm install -g @anthropic-ai/claude-code
- Once installed, run claude at the terminal.
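Putting it all together, the whole setup is basically this (the project directory name is, of course, just an example):

```
npm install -g @anthropic-ai/claude-code   # install the CLI globally
cd my-project                              # go to the project you want CC to work on
claude                                     # start Claude Code; the first run asks you to authenticate
```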
The first time you run this command you will be prompted to authenticate. There are two options: either purchase fixed-price access (20 bucks a month for the basic plan) or use an API key and buy tokens on the fly. For now I am on the second option, until I notice I am spending more than $20 a month, as the fixed-price access comes with some discount.
And you are all set; you can start using Claude Code (CC for short).
Usage hints #
It's all about context #
Yeah, context, a fancy name for the information that we feed into the AI model. The more precise we are, the better. The garbage in, garbage out rule applies here, so be careful.
The handy idea is to keep generic information that will be needed for every prompt in the CLAUDE.md file. To generate one automatically, just write /init at the prompt.
The CLAUDE.md file can be edited manually, or interactively from the command prompt by typing something starting with the hash sign #, so # blah blah blah will add "blah blah blah" to CLAUDE.md.
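Just to give an idea, a CLAUDE.md could look more or less like this (the project details below are completely made up, yours will differ):

```
# CLAUDE.md

## Project
Java 21 backend service, built with Maven.

## Commands
- Build: mvn clean package
- Run tests: mvn test

## Conventions
- Use TestNG for tests, not JUnit.
- Prefer constructor injection.
```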
Typically, we will have to add some files with the code that we want CC to reason about or modify. The best practice is to add only those files that are really needed; otherwise we clutter the context, which causes unwanted effects:
- The model works worse - in the case of LLMs and context, less means more.
- You use more tokens, hence you have to pay more.
So, coming back to adding files to the context: it is done by writing @ plus the file path at the command prompt.
We can also add whole directories using the /add-dir command.
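For example (the paths below are obviously made up):

```
> Refactor @src/main/java/com/example/OrderService.java and remove the duplicated validation logic
> /add-dir ../shared-library
```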
Another neat thing is that you can paste into the context not only text but graphics too. So, if you have some Figma designs/mockups, you can paste them using Ctrl-V and ask CC to create a website from the provided design.
Managing context #
We already know that context is key, as important as our "prompt", hence we need to manage it properly.
When CC starts "thinking" and preparing changes, this process can be interrupted at any time by pressing the Esc key. This is handy if you see that Claude Code is doing something the wrong way. If you spot a mistake that could be repeated in other conversations, you can add it to the "memory", that is CLAUDE.md, by adding a hint for CC using the hash shortcut.
Example: in the Java world there are two popular testing libraries: JUnit and TestNG. If you prefer TestNG (which is IMHO much better), just say it:
# Use TestNG for tests
And from now on CC should honor your preferences, as they are stored in CLAUDE.md.
When we have a long conversation with CC, the context might become cluttered with irrelevant information. Typically this happens when Claude Code creates code that fails and then iteratively fixes it: we ask it to run the test, the test fails, we ask it to fix the bug, and we repeat the process. Clearly such back-and-forth discussion might be totally useless for the next task.
What can we do? We can rewind the conversation. This is done by pressing Esc twice. We are presented with all sent messages and can jump back to any previous point and start from that spot.
In this way we can retain the valuable part of the context, which we have already paid for with our tokens (hence money), while removing the useless part of the conversation history.
Another useful technique is the /compact command. This compresses the context, leaving only the most relevant information (this, obviously, might not work perfectly, as with everything in AI).
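As far as I know, /compact also accepts an optional hint about what to keep, so you can write something like:

```
> /compact
> /compact keep the final fix for the failing test, drop the intermediate attempts
```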
Finally, we can delete the whole context using the /clear command. This is important when we are switching to a task mostly unrelated to the previous one. Again, remember, less is more in the case of Large Language Models; retaining useless context in memory makes AI reasoning less effective.
Use "plan mode" #
This is a cool idea: CC will not make any changes, it will outline a plan for the changes, so it can get reviewed and altered before we are good to go.
Switching to "plan mode": press Shift-Tab twice (you will see "plan mode on" information).
Thinking modes #
Thinking modes make the model spend more effort reasoning before it answers. You can turn them on by adding one of the magic words to your prompt:
- "Think" - Basic reasoning
- "Think more" - Extended reasoning
- "Think a lot" - Comprehensive reasoning
- "Think longer" - Extended time reasoning
- "Ultrathink" - Maximum reasoning capability
So writing "do this and that fix, think a lot about the solution" switches on "Comprehensive reasoning".
Note:
- Yes, you can turn on thinking mode by accident.
- If you use higher thinking mode, you spend more tokens (hence money), so use it carefully.
Plan mode vs thinking mode? #
Those two serve different purposes and can also be used together.
Planning Mode is best for:
- Tasks requiring broad understanding of your codebase
- Multi-step implementations
- Changes that affect multiple files or components
Thinking Mode is best for something more focused on a single task, like:
- Complex business logic problems
- Debugging/Fixing hard bugs
- Developing non-trivial algorithms
MCP servers #
Model Context Protocol is a great idea that opens the door for AI to reach out to the real world, that is, to communicate with various tools. The most popular MCP server out there is the GitHub MCP server, which lets you create/review pull requests and interact with the code repository in all kinds of ways.
Another useful server is the Playwright server. Playwright is a pretty cool library for testing web applications running live. Playwright can open a browser and move around the website like a real user. So we can ask CC to run the website, review its content (for instance, whether it matches the provided mockups or the styling looks good) and fix the code.
Installing an MCP server is simple:
claude mcp add playwright npx @playwright/mcp@latest
Claude will ask for permission to run this MCP server every time, so it might be handy to add a permanent permission for it.
Open the .claude/settings.local.json file and add playwright server:
{
  "permissions": {
    "allow": ["mcp__playwright"],
    "deny": []
  }
}
Yup, we need to use double underscore in the name: mcp__playwright.
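Once the permission is in place, you can ask CC to drive the browser. A made-up example prompt (the URL and the checks are just for illustration):

```
> Start the dev server, then use playwright to open http://localhost:3000
  and check that the landing page matches the mockup I pasted earlier.
  Fix the CSS if the header spacing looks off.
```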
[!IMPORTANT] Be careful with letting tools run. As you have probably noticed, Claude Code is allowed to modify files on your system, which is potentially dangerous. It may happen that some MCP server returns a bogus response that makes Claude Code perform some unwanted action. This is called prompt injection and can be really dangerous (think of stealing your private keys...).
There are a lot of MCP servers out there for working with databases, cloud service integrations, file system operations (yes, this is scary), etc.
Note also that each MCP server might use various tools (like bash); in order to enable this capability you need to explicitly define a permission for each tool of each MCP server. This is a mundane task, but that is how it should be, for the sake of security.
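For illustration, a per-tool allow list could look roughly like this. The convention is mcp__&lt;server&gt;__&lt;tool&gt;, but the exact tool names depend on the server, so treat the entries below as examples rather than something to copy blindly:

```
{
  "permissions": {
    "allow": [
      "mcp__playwright__browser_navigate",
      "mcp__playwright__browser_click",
      "Bash(npm run build:*)"
    ],
    "deny": []
  }
}
```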
Other stuff #
This was just a quick overview of the key Claude Code features. There is much more to it, like hooks, very deep integration with GitHub and GitHub Actions, and something called "Skills", which allows us to tell CC how to perform particular tasks.
Resources #
- https://anthropic.skilljar.com/claude-code-in-action/3
- https://docs.claude.com/en/docs/claude-code/overview