That looks cool. I have never been a fan of cyclomatic complexity analysis. At some point I suspect a perfect score would be branchless code, but that isn’t maintainable.
I prefer redundancy analysis checking for duplicate logic in the code base. It’s more challenging than it sounds.
> At some point I suspect a perfect score would be branchless code, but that isn’t maintainable.
That's a failure to understand and interpret computational complexity in general, and cyclomatic complexity in particular. I'll explain why.
Complexity is inherent to a problem domain, which automatically means it's unrealistic to assume there's always a no-branching implementation. However, higher-complexity code is associated with higher likelihood of both having bugs and introducing bugs when introducing changes. Higher-complexity code is also harder to test.
Based on this alone, it's obvious that it is desirable to produce code with low complexity, and that there are advantages in refactoring code to lower its complexity.
How do you tell if code is complex, and what approaches have lower complexity? You need complexity metrics.
Cyclomatic complexity is a complexity metric designed to output a score based on an objective and very precise set of rules: the number of branching operations and independent code paths in a component. The fewer code paths, the easier the component is to reason about and test, and the harder it is for bugs to hide.
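To illustrate the counting (a rough sketch of the usual rules; the exact rules a given tool applies may differ):

// One base path, plus one for each decision point:
// the `if`, the `&&`, and the `else if` give a cyclomatic complexity of 4.
function shippingCost(order: { total: number; express: boolean }): number {
  if (order.total > 100 && !order.express) {
    return 0;
  } else if (order.express) {
    return 15;
  }
  return 5;
}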
You use cyclomatic complexity to figure out which components are more error-prone and harder to maintain. The higher the score, the higher the priority to test, refactor, and simplify. If you have two competing implementations, in general you are better off adopting the one with the lower complexity.
Indirectly, cyclomatic complexity also offers you guidelines on how to write code. Branching increases the likelihood of bugs and makes components harder to test and maintain. Therefore, you are better off favoring solutions that minimize branching.
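For example (a contrived sketch of my own), replacing a chain of conditionals with a lookup removes decision points without hiding the logic:

// Before: three `if` statements, so four independent paths.
function discountBefore(tier: string): number {
  if (tier === "gold") return 0.2;
  if (tier === "silver") return 0.1;
  if (tier === "bronze") return 0.05;
  return 0;
}

// After: a single `??` fallback, and adding a new tier no longer adds a branch.
const discounts: Record<string, number> = { gold: 0.2, silver: 0.1, bronze: 0.05 };

function discountAfter(tier: string): number {
  return discounts[tier] ?? 0;
}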
The goal is not to minimize cyclomatic complexity. The goal is to use cyclomatic complexity to raise awareness on code quality problems and drive your development effort. It's something you can automate, too, so you can have it side by side with code coverage. You use the metric to inform your effort, but the metric is not the goal.
Sounds like coding to the metric would lead to hard-to-read code, as you find creative and convoluted ways to multiply by one and zero so you can pretend you aren't branching.
"When a measure becomes a target, it ceases to be a good measure."
You are free to interpret the score within the broader context of your own experience, the problem domain your code addresses, time constraints, etc.
Measuring complexity shouldn’t lead to finding creative ways to avoid complexity, but instead be used as a tool to encapsulate complexity well.
It could be misapplied, of course, like every other principle. For example, DRY is a big one. Just like DRY, there are cases where complexity is deserved: if nothing else, simply considering that no code used in real world context can ever be perfect, it is useful to have another measure that could hint on what to focus on in future iterations.
Oh, it does. That's what experience teaches you - that the measure is not the target.
Mildly related: TypeScript Call Graph - CLI to generate an interactive graph of functions and calls from your TypeScript files - my project.
https://github.com/whyboris/TypeScript-Call-Graph
This is a bit underwhelming because it gives a score and says, "Needs improvement", but has no real indication of what it considers problematic about a file. Maybe as a very senior TypeScript developer it could be obvious how to fix some things, but this isn't going to help anyone more junior on the team be able to make things better.
> This is a bit underwhelming because it gives a score and says, "Needs improvement", but has no real indication of what it considers problematic about a file.
I think you didn't bother to pay attention to the project's description. The quick start section is clear that the "score" is an arbitrary metric that "serves as a general, overall indication of the quality of a particular TypeScript file", and it's quite clear that the full metrics are available for each file. The Playground page showcases a very clear, informative, and detailed summary of how a component was evaluated.
> Maybe as a very senior TypeScript developer it could be obvious how to fix some things, but this isn't going to help anyone more junior on the team be able to make things better.
Anyone can look at the results of any analysis run. They seem to be extremely detailed and informative.
I definitely did pay attention to the description and the playground. The "full metrics" give more information, but they're still just numbers and don't explain to someone _what_ they should do to make something “better”. Again, they're just numbers, not recommendations. Most people could probably just gamify the whole thing by making every file as small as possible. Single functions with as few lines as possible. That doesn't make code less complex, it just masks it.
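To make that concrete (a contrived illustration of my own, not output from the tool): splitting a branchy function into tiny helpers lowers each individual score, but the same decision points still exist, just spread across more places a reader has to follow.

// One function holding all the checks in one place...
function parsePort(value: string): number {
  const n = Number(value);
  if (Number.isNaN(n)) throw new Error("not a number");
  if (n < 1 || n > 65535) throw new Error("out of range");
  return n;
}

// ...versus the same checks smeared across tiny helpers. Each helper scores
// lower on its own, but the total number of paths through the code is unchanged.
const toNumber = (value: string) => Number(value);
const assertNumber = (n: number) => { if (Number.isNaN(n)) throw new Error("not a number"); return n; };
const assertRange = (n: number) => { if (n < 1 || n > 65535) throw new Error("out of range"); return n; };
const parsePortSplit = (value: string) => assertRange(assertNumber(toNumber(value)));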
For a refactoring project I've built the tool's reports into the CI pipeline of our repository. On every PR it creates a pinned comment with the current branch's complexity scores, comparing them to the target branch and reporting a trend.
It may not be perfect in its outputs, but I like it for bringing attention to emerging (or still existing) hotspots.
I've found that the output aligns well, at least at a high level, with my own sense of which files deserve work and which ones are fine. Additionally, it's given us measurable outcomes for code refactoring, which non-technical people like as well.
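Roughly, the comparison step looks like this (a simplified sketch of my own; the report file names and shape are placeholders, not the tool's actual output format):

import { readFileSync } from "node:fs";

// Placeholder shape: file path -> complexity score, one report per branch.
type Report = Record<string, number>;

const target: Report = JSON.parse(readFileSync("target-report.json", "utf8"));
const current: Report = JSON.parse(readFileSync("current-report.json", "utf8"));

for (const [file, score] of Object.entries(current)) {
  const before = target[file];
  if (before !== undefined && score !== before) {
    const trend = score > before ? "worse" : "better";
    console.log(`${file}: ${before} -> ${score} (${trend})`);
  }
}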
Maybe I'm doing things wrong, but I assume this tool is meant to focus on cognitive complexity and not things like code quality, transpiling, or performance. But if that's true, then why does this:
(score is 7)
function get_first_user(data) {
  first_user = data[0];
  return first_user;
}
Score better than this:
(score is 8)
function get_first_user(data: User[]): Result<User> {
  first_user = data[0];
  return first_user;
}
I mean, I know the type annotations are what account for the difference in score, but I would argue that the latter has the lower cognitive complexity.
I get the same overall FTA score of 7 for both of your examples. When omitting the return type (which can be inferred), you get the exact same scores. Not just the same FTA score. Also note that `Return<User>` should be just `User` if you prefer to specify the return type explicitly. That change will improve several of the scores as well.
> Also note that `Return<User>` should be just `User` if you prefer to specify the return type explicitly.
No? first_user = data[0] assigns User | undefined to first_user, since the list isn't guaranteed to be non-empty. I expect Return to be implemented as type Return<T> = T | undefined, so Return<User> makes sense.
You are correct if `noUncheckedIndexedAccess` is enabled. It is off by default (which is a pity, really).
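A minimal sketch of the difference:

interface User { name: string }

function getFirstUser(data: User[]) {
  const firstUser = data[0];
  // With the default compiler options, firstUser is typed as User.
  // With "noUncheckedIndexedAccess": true, it is typed as User | undefined,
  // so callers are forced to handle the empty-array case.
  return firstUser;
}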
I assumed `Return<User>` was a mistake, not a custom type as you suggest. But your interpretation seems more likely anyway.
Both score 7 now though.
This scores 6: function a(b) { return b[0]; }
This scores 3: const a = (a) => a;
Maybe because the type can be inferred and it potentially adds effort for changes in the future.
I'm not sure how you can infer types on this. Even if you input an array of users from a different function. How would we know that data[0] is a User and not undefined?
Then why use TypeScript at all? Just write JS and put a TS definition on top. TS is a linter anyway. Now that will make the code easier to read, and in the end it is the code that will be interpreted by the browser or whatever JS runtime.
> TS is a linter anyway.
Not really. TypeScript introduces optional static type analysis, but how you configure TypeScript also has an impact on how your codebase is transpiled to JavaScript.
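For example, the same source emits different JavaScript depending on the configured target:

// With "target": "ES2020" or later, the optional chain below is emitted as-is;
// with an older target such as "ES2018", the compiler downlevels it to explicit
// null/undefined checks, so the JavaScript you actually ship differs.
function getCity(user?: { address?: { city: string } }): string | undefined {
  return user?.address?.city;
}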
Nowadays there is absolutely no excuse to opt for JavaScript instead of TypeScript.
What about debugging? Or, with a proper source map, can the client-side code be debugged with the right mapping back to the TS code? It just feels like an extra layer of complexity in the deployment process and debugging.
> What about debugging.
With source maps configured, debugging tends to work out of the box.
The only place where I personally saw this become an issue was a non-Node.js project that used an obscure barreler, and it only posed a problem when debugging unit tests.
> Just feels like an extra layer of complexity in the deployment process and debugging.
Your concern is focused on hypothetical tooling issues. Nowadays I think the practical pros greatly outweigh the hypothetical cons, to the point where you need to bend over backwards to even argue against adopting TypeScript.
> (...) I assume this tool is meant to focus on cognitive complexity and not things like code quality, transpiling or performance (...)
I don't know about transpiling or performance, but cyclomatic complexity is associated with both cognitive complexity and code quality.
I mean, why would code quality not reflect cognitive load? What would be the point, then?