gh-135676: Simplify docs on lexing names #140464
Conversation
Co-authored-by: Stan Ulbrych <89152624+StanFromIreland@users.noreply.github.com>
Co-authored-by: Blaise Pabon <blaise@gmail.com>
Co-authored-by: Micha Albert <info@micha.zone>
Co-authored-by: KeithTheEE <kmurrayis@gmail.com>

willingc left a comment
Outstanding document @encukou. I had one small suggestion: be a bit more explicit in the normalization example with `number`.
Doc/reference/lexical_analysis.rst
Outdated
This means that, for example, some typographic variants of characters are
converted to their "basic" form, for example::

    >>> nᵘₘᵇₑʳ = 3
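As context for the review discussion below, the normalization applied to identifiers is NFKC, and the effect on this example can be observed directly with the standard-library `unicodedata` module:

```python
import unicodedata

# NFKC normalization maps typographic variants such as superscript
# and subscript letters to their "basic" forms.
print(unicodedata.normalize("NFKC", "nᵘₘᵇₑʳ"))  # prints "number"
```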
It would be helpful to add an explicit comment that the normalized form of nᵘₘᵇₑʳ is number.
Does this look good?

There was an insightful conversation in #140269. I'll update this PR to make things even clearer.
willingc left a comment
Thanks @encukou
willingc left a comment
Nice work @encukou!
Thank you for the review! @malemburg, do you also want to take a look?
Thanks @encukou for the PR 🌮🎉. I'm working now to backport this PR to: 3.14.
Sorry, @encukou, I could not cleanly backport this to
GH-142015 is a backport of this pull request to the 3.14 branch.
This simplifies the Lexical Analysis section on Names (but keeps it technically correct) by putting all the info about non-ASCII characters in a separate (and very technical) section.

It uses a mental model where the parser doesn't handle Unicode complexity "immediately", but:

- parses any non-ASCII character (outside strings/comments) as part of a name, since these can't (yet) be e.g. operators
- normalizes the name
- validates the name, using the xid_start/xid_continue sets

(cherry picked from commit 2ff8608)

Co-authored-by: Petr Viktorin <encukou@gmail.com>
Co-authored-by: Stan Ulbrych <89152624+StanFromIreland@users.noreply.github.com>
Co-authored-by: Blaise Pabon <blaise@gmail.com>
Co-authored-by: Micha Albert <info@micha.zone>
Co-authored-by: KeithTheEE <kmurrayis@gmail.com>
This simplifies the Lexical Analysis section on Names (but keeps it technically correct) by putting all the info about non-ASCII characters in a separate (and very technical) section.

It uses a mental model where the parser doesn't handle Unicode complexity "immediately", but:

- parses any non-ASCII character (outside strings/comments) as part of a name, since these can't (yet) be e.g. operators
- normalizes the name
- validates the name, using the id_start/id_continue sets (referred to in previous sections as "letter-like" and "number-like" characters, with a link to the details)

This also means we don't need xid_start/xid_continue to define the behaviour :)

📚 Documentation preview 📚: https://cpython-previews--140464.org.readthedocs.build/
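The parse → normalize → validate mental model described in the PR can be sketched roughly as follows. This is an illustration only, not CPython's actual tokenizer; `is_valid_name` is a hypothetical helper:

```python
import unicodedata

def is_valid_name(token: str) -> bool:
    # Hypothetical helper illustrating the mental model: first
    # normalize the candidate name (NFKC), then validate the result.
    normalized = unicodedata.normalize("NFKC", token)
    # str.isidentifier() applies the xid_start/xid_continue rules
    # from the language definition to the normalized string.
    return normalized.isidentifier()

print(is_valid_name("nᵘₘᵇₑʳ"))  # True: normalizes to "number"
print(is_valid_name("3abc"))    # False: a name can't start with a digit
```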