Include Advanced Extra-Credit tests #129
Perhaps we can consider adding new tests where the user can complete several exercises of much higher difficulty.

Comments
I feel like this is a logical extension of what we offer currently; I'd be in favor of extending this material to cover more intermediate concepts like you describe. My only caution would be that we do so in a way that does not jeopardize the approachability of the repository for a beginner; and that, as the problems we "assess" gradually become more abstract, we take care to indicate when multiple solutions to a given abstract problem can be thought of as equally viable depending on context. The more abstract the challenge, the more nuanced the potential solutions, which can be difficult to encapsulate in unit-test-driven code.
@Stephn-R there is a regex section already, would this be an extension of that? i do agree with @kadamwhite, but as an example, #128 i think is a valuable extension. i'm not sure i love "extra-credit" as the term, but we do need a strategy for extending what we already have. would love to hear more discussion, especially @jmeas and @rmurphey ...
One possibility, as mentioned in a comment in #128, is to make a separate repo a la … I feel like this is one of those more abstract problems that may have several equally viable solutions depending on the context, as @kadamwhite alluded to earlier... 😆
@kadamwhite @ashleygwilliams @jacobroufa @everyone I 1000% agree with everyone. This would be better off as an extension library/npm module we could include. Any thoughts yet on how to make …
Hey @Stephn-R, I'm gonna copy and paste my comment from #128:

"I'm all for modularity; however, I don't think that a package manager is the way to go. It adds (in my opinion) an unnecessary level of complexity and refactoring of the existing code base. The existing setup actually lends itself to namespacing and modularity quite well. Different test suites can use different runner.html files, and in addition, if you take a look at the existing package.json file's "scripts": { … } block, it is easy to see how we could do something like this: "scripts": { … }.

In terms of the js-assessment already being fairly long, well, I don't think it's too much to expect employers using the js-assessment test to pick and choose which tasks they want prospective hires to undergo. It's already easy to turn certain tests on and off by simply adding an "x" in front of the corresponding mocha describe statement.

I think one possible temporary solution (because it will take time to figure out which tests go together, what is intermediate, what is advanced, what we want to include in the core functionality, etc.) is to add an "additional tests" section. Here we can add more nuanced/advanced tests that we think are useful. If someone wants to use these tests, they can simply execute npm test-additional, and people can also pick and choose which of the additional tests get run using the mocha "x" feature (we can default them all to off). This area could also serve as a "testing ground" to see which tests should make it into the core set of tests that js-assessment currently employs. Thoughts?"
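For context on the mechanics referenced above: the "x" toggle is Mocha's built-in skip facility (xdescribe, or equivalently describe.skip), which registers a suite as pending so it shows up in the report but never runs. A minimal sketch, assuming a hypothetical tests/additional/ folder and Node's built-in assert (the folder, file, and suite names are illustrative, not part of the existing repo):

```js
// tests/additional/extra.js -- hypothetical "additional tests" file.
var assert = require('assert');

// Prefixing with `x` marks the whole suite as pending: Mocha lists it
// in the output but does not execute its tests (i.e. defaulted to "off").
xdescribe('additional: advanced exercises', function () {
  it('is skipped until someone removes the x', function () {
    assert.fail('this never runs while the suite is skipped');
  });
});

// Remove the `x` (or use plain describe) to toggle a suite back on.
describe('additional: warm-up', function () {
  it('runs normally', function () {
    assert.strictEqual(1 + 1, 2);
  });
});
```

The npm test-additional idea would likewise just be a second entry next to the existing test script in package.json, pointing the same runner at the additional files and invoked as npm run test-additional (custom scripts need the run prefix); the exact command would depend on how the current test script is wired.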
@richardartoul, I fail to see how modularizing the tool is a challenge. Thankfully, @ashleygwilliams and @rmurphey have done an _amazing_ job keeping the code very simple. Now, that is not to say that your advice comes without merit; it does make it worthwhile to add more to the ….

I will spend some time looking into creating a very simple extensibility library, but for the most part I would recommend that you _not_ limit your thinking about what is or is not possible or compatible. These assessment tests serve the purpose of giving other groups and organizations a unified approach to testing one's proficiency with JavaScript. As such, there is a degree of expectation that those developing the application itself are regarded as experts in JavaScript and can perform as such.
@Stephn-R I'm not suggesting that creating a modularity tool is too much of a challenge; I was simply pointing out that, since, like you said, @ashleygwilliams did a great job of keeping the codebase clean, there are already great structures in place that lend themselves to modularity without forcing the user to deal with a package manager. I was emphasizing simplicity from the user's standpoint, not the developer's :)

That being said, I was not aware that it was @ashleygwilliams's goal to avoid modifying the original project. The README still lists this: "Submit a pull request! The tests are currently loosely organized by topic, so you should do your best to add tests to the appropriate file in tests/app, or create a new file there if you don't see an appropriate one. If you do create a new file, make sure to add it to tests/runner.js, and to add a stub for the solution to the corresponding file in app/. Finally, it would be great if you could update the answers as well." So I thought it was very much still a work in progress. If an extensibility library is the way the team wants to go, I'm happy to write tests that way as well.
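To make the quoted README workflow concrete, here is a minimal sketch of a contributed test plus its solution stub. The file name extraCredit, the CommonJS require/module.exports style, and the use of Node's assert are illustrative assumptions; the real files under tests/app/ and app/, and the registration step in tests/runner.js, may be wired differently.

```js
// tests/app/extraCredit.js -- hypothetical new test file (it would also
// need an entry in tests/runner.js, per the README).
var assert = require('assert');
var answers = require('../../app/extraCredit');

describe('extra credit', function () {
  it('flattens a nested array one level deep', function () {
    assert.deepStrictEqual(
      answers.flattenOnce([1, [2, 3], [4, [5]]]),
      [1, 2, 3, 4, [5]]
    );
  });
});
```

```js
// app/extraCredit.js -- matching solution stub, left for the candidate.
module.exports = {
  flattenOnce: function (arr) {
    // implement me
  }
};
```

The stub is intentionally unimplemented, so the new test fails until a candidate fills it in, which is the pattern the README describes.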
Hi there -- I'm the original creator of this, and just wanted to weigh in. @ashleygwilliams has been extremely generous in finding the time to take this project on, addressing old pull requests and bringing the project up to modern standards. The portion of the README that you quoted, though, is all me.

I am all for seeing more tests contributed, but I'm wary of categorizing tests as beginner, intermediate, or advanced. The definitions of those terms vary widely, and I much prefer the current self-guided situation. The current situation also means that companies using this test have to actually understand what's being asked, vs. just saying "here, take the intermediate tests please."

The original set of tests was, loosely, organized in order of increasing difficulty -- that is, tests early in a file tend to be easier than tests later in a file. I'd encourage sticking with that pattern, but I would be 👎 on further classification.
hey everyone! so first off, thanks for all the comments! i think what i might be leaning towards doing here is creating a contrib/ directory. when we go to release new major versions of js-assessment, i imagine that we can make the decision to add/remove new categories from contrib/.

in the long run i would like to build package-like functionality. i'd also like to create a fully in-browser experience. both of those would make it easier to discover content that might otherwise get lost in a large ….

thoughts? i'd like to have this sorted by the end of the week so we can clean up all the PRs and issues :) thanks again for all the dialogue! really love that y'all care enough to contribute ❤️ ❤️ ❤️
@ashleygwilliams sounds like a great plan, good mix of short-term practicality and long-term goals.
Right on point @ashleygwilliams, I say let's go with it.
@ashleygwilliams, I think you've got a great plan. The contributing guide might shed more light on how we decide what goes in contrib/.
+50000
I like the idea of a contrib/ directory for adding more tests that don't make sense in the current test files. I do think it's fine for advanced tests to live alongside more basic ones, assuming the files generally start with simpler tests and move on to more complicated ones. If a lot of tests emerge in the contrib/ directory, then we can figure out how to incorporate them. In the meantime, I'm closing this issue due to age.