My thoughts are similar to titotal's above: I found it hard to get through. There are a lot of stock Claude/LLM phrases, such as the "It's not this. It's this" construction and "Reality check", along with slightly too uncommon synonyms and slightly too fancy vocabulary.
I think there's value in LLM feedback, but when it rewrites whole sections it usually starts to feel annoying to me. I don't know if you have a "system prompt" for your Claude, but prompting it to preserve your voice much more, or to just give you a specific list of improvements to implement, might work. It could also be worth giving Claude other things you've written as context for "your voice", along with strict instructions to avoid certain ways of writing.
Some of the things I did like from the Claude version because they made it more skimmable and easier to figure out what was happening:
The weeks in the section headers
Key points bolded
The section recapping what you learned about career transitions
Relatedly, I think having a TL;DR at the top of posts is generally helpful
I struggle with the same perfectionism, but reading your original post, it does not seem net-negative to me. It works very well for the personal-reflection blog post format and is much more enjoyable to read. If you were applying for writing/blogging positions it would probably be too unpolished, but even then they wouldn't care if you had older material that was less polished. If you're concerned about it, you could probably mostly mitigate it by adding a disclaimer at the top that you wrote it in a limited amount of time.
You also can't really make a mistake in this kind of post, because it is a personal reflection. It's about your experience, rather than, e.g., presenting research results or carefully arguing for an opinion, which would be much higher stakes and would require more care. You can't get your own experience wrong.
I think this post is very valuable as a resource for other people considering going to a future iteration of ARBOx or self-studying the ARENA curriculum. It reminds me a bit of the ML4Good experience reports [1] [2] [3] [4].