Replies: 4 comments 3 replies
-
Besides the browser itself, we might also take a look at contributor acquisition. We should define a clear homepage with information about who we are and what we are building (and maybe why). It should be a one-stop shop for new people curious about the project, with all kinds of links to the different parts of the Gosub infrastructure and documentation.
-
I searched for different JS engines and found:
-
Boa seems like an interesting option: https://github.com/boa-dev/boa Here is the conformance information: https://boajs.dev/boa/test262/ It seems actively developed and is written in Rust.
-
Just to make you aware, there is also this project for parsing CSS in Rust: https://github.com/parcel-bundler/lightningcss I hope it might provide some inspiration.
-
The next steps in the Gosub browser project.
At this moment, we've reached a very important milestone: both the tokenizer and the parser pass all* the tests from the html5lib-tests suite, which means we can now process HTML5 documents into (document) trees.
From this point on, there are a few things we need to do, research, and discuss, in no particular order:
Figure out the rendering steps
Once we have a tree, we can start thinking about getting information from that tree onto a screen. Rendering isn't easy, though, and involves a lot of substeps we must take. In combination with CSS style sheets, it can become a very complex beast that, I think, needs a lot of research at this point.
Optimizing the current parser / tokenizer
Even though the current parser parses, there is still a lot of room for design improvements. The main architecture can be done better. For instance, we could try to set up a bytestream - encoder - tokenizer - parser - treebuilder pipeline in which we could theoretically swap out components when we need to (or just to experiment with other/better code).
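The swappable-stage idea could be sketched roughly as follows. None of these names (`CharSource`, `StringSource`, `Tokenizer`) come from the Gosub codebase; the point is only that each stage hides behind a trait, so an alternative implementation can be dropped in without touching the others:

```rust
/// Hypothetical interface standing in for the bytestream + encoder
/// stages: it yields decoded characters, wherever they come from.
trait CharSource {
    fn next_char(&mut self) -> Option<char>;
}

/// A trivial in-memory source; a file- or network-backed source could
/// implement the same trait without any tokenizer changes.
struct StringSource {
    chars: std::vec::IntoIter<char>,
}

impl StringSource {
    fn new(s: &str) -> Self {
        Self { chars: s.chars().collect::<Vec<_>>().into_iter() }
    }
}

impl CharSource for StringSource {
    fn next_char(&mut self) -> Option<char> {
        self.chars.next()
    }
}

/// The tokenizer is generic over its source, so it only knows about
/// the CharSource trait, not about files, sockets, or encodings.
struct Tokenizer<S: CharSource> {
    source: S,
}

impl<S: CharSource> Tokenizer<S> {
    /// Toy tokenization (whitespace-separated words) just to show the
    /// pipeline shape; real HTML tokenization is far more involved.
    fn words(&mut self) -> Vec<String> {
        let mut words = Vec::new();
        let mut current = String::new();
        while let Some(c) = self.source.next_char() {
            if c.is_whitespace() {
                if !current.is_empty() {
                    words.push(std::mem::take(&mut current));
                }
            } else {
                current.push(c);
            }
        }
        if !current.is_empty() {
            words.push(current);
        }
        words
    }
}
```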
The current setup might also have functionality that we don't need and that is holding us back. For instance, there is an unoptimized system that counts newlines in order to generate correct positions in the tokenizer. This causes massive delays when tokenizing large blobs.
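One common fix for the newline-counting cost is to build a table of line-start offsets in a single pass and answer position queries by binary search, instead of counting newlines per character. A minimal sketch (names are illustrative, not Gosub's actual code):

```rust
/// Byte offsets at which each line starts; built once per document.
struct LineOffsets {
    starts: Vec<usize>,
}

impl LineOffsets {
    fn new(text: &str) -> Self {
        let mut starts = vec![0];
        for (i, b) in text.bytes().enumerate() {
            if b == b'\n' {
                starts.push(i + 1);
            }
        }
        Self { starts }
    }

    /// Map a byte offset to a (line, column) pair, both 1-based.
    fn position(&self, offset: usize) -> (usize, usize) {
        // partition_point = number of line starts at or before `offset`,
        // i.e. the 1-based line number; O(log n) per query.
        let line = self.starts.partition_point(|&s| s <= offset);
        let col = offset - self.starts[line - 1] + 1;
        (line, col)
    }
}
```

With this, the tokenizer's hot loop never touches position bookkeeping; positions are computed lazily, only when an error or token actually needs one.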
Also, we might want to make the tokenizer more "greedy", fetching not a single character at a time but as many as we can. This used to be the default, but it was changed in order to pass some of the tests. We may be able to fix those edge cases and restore the greedy tokenizer, so that less time is spent in parser overhead.
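Greedy consumption might look like the sketch below: instead of pulling one character per call, take the longest run of characters satisfying a predicate in a single call, returned as a slice with no per-character allocation. This is an assumed shape, not the actual Gosub tokenizer:

```rust
/// Illustrative input wrapper over a borrowed string.
struct Input<'a> {
    text: &'a str,
    pos: usize,
}

impl<'a> Input<'a> {
    /// Consume the longest prefix whose chars satisfy `pred`,
    /// returning it as a single borrowed slice.
    fn take_while(&mut self, pred: impl Fn(char) -> bool) -> &'a str {
        let rest = &self.text[self.pos..];
        let end = rest
            .char_indices()
            .find(|&(_, c)| !pred(c))
            .map(|(i, _)| i)
            .unwrap_or(rest.len());
        self.pos += end;
        &rest[..end]
    }
}
```

For HTML text content, for example, a single `take_while(|c| c != '<' && c != '&')` call could replace many per-character state-machine iterations.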
Having a network layer to fetch documents
Fetching a document over HTTP is easy, except when it's not. We currently use a library (reqwest, I believe), but we want a better way of fetching documents. This includes DNS queries for hostname resolution (including things like DoH), caching of data, dealing with TLS, and so on.
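One way to keep the HTTP client, DNS resolver, and cache replaceable is to put the whole fetch layer behind a trait, with concerns like caching implemented as wrappers. A sketch under that assumption (all names hypothetical; `FakeFetcher` stands in for a real client like reqwest):

```rust
use std::collections::HashMap;

/// The engine would only depend on this trait, not on a concrete
/// HTTP client, resolver, or TLS stack.
trait Fetcher {
    fn fetch(&mut self, url: &str) -> Result<String, String>;
}

/// A cache layer that wraps any other Fetcher.
struct CachingFetcher<F: Fetcher> {
    inner: F,
    cache: HashMap<String, String>,
}

impl<F: Fetcher> Fetcher for CachingFetcher<F> {
    fn fetch(&mut self, url: &str) -> Result<String, String> {
        if let Some(body) = self.cache.get(url) {
            return Ok(body.clone()); // cache hit: inner fetcher untouched
        }
        let body = self.inner.fetch(url)?;
        self.cache.insert(url.to_string(), body.clone());
        Ok(body)
    }
}

/// In-memory stand-in used here instead of a real HTTP client; it
/// counts calls so the cache behaviour is observable.
struct FakeFetcher {
    calls: usize,
}

impl Fetcher for FakeFetcher {
    fn fetch(&mut self, url: &str) -> Result<String, String> {
        self.calls += 1;
        Ok(format!("<html>{url}</html>"))
    }
}
```

DNS (including DoH) and TLS policy could be further wrappers or constructor parameters behind the same trait, so each concern stays testable in isolation.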
Have a user-agent that can use our gosub engine
The user-agent actually raises two questions:
First, we have to create a feature-list for the user-agent. I reckon it will be a small list with the near essentials. My vision is to have a highly configurable browser (through settings), but have simple options for end-users who don't want to deal with these things. (more like a setup wizard with some questions that will set the correct settings for that user).
Secondly, we need to think about which UI toolkit we will be using. At the moment there are some small experiments going on with toolkits, but they are preliminary. There is also discussion about writing the user-agent as a web-app that is rendered within the Gosub engine itself; this seems to be the way some other browsers do it as well. We probably have to think about mobile browsing too, although things might be difficult for iOS, as it's pretty much forbidden to use a non-Apple browser engine on their walled-garden system.
Tasks:
Implement javascript
During parsing of a document, JavaScript can be executed that changes the document tree. For this to work, we obviously need to support JavaScript. Instead of writing our own JavaScript interpreter, we can opt for one of the existing ones.
We should figure out which ones are available, and their pros and cons.
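Whichever engine wins the comparison, it may help to hide the choice behind a trait so engines can be benchmarked and swapped without touching the parser. A sketch under that assumption; `EchoEngine` is a placeholder, not a real interpreter, and a real implementation would wrap e.g. Boa's evaluation context:

```rust
/// Hypothetical engine-agnostic interface the parser would call into.
trait JsEngine {
    /// Evaluate a script and return its result as a string.
    fn eval(&mut self, script: &str) -> Result<String, String>;
}

/// Placeholder implementation so the sketch is self-contained; it
/// does no actual JS evaluation.
struct EchoEngine;

impl JsEngine for EchoEngine {
    fn eval(&mut self, script: &str) -> Result<String, String> {
        Ok(format!("evaluated: {script}"))
    }
}

/// Parser-side code only sees the trait object, so the "which engine"
/// decision stays local to a single constructor call.
fn run_inline_script(engine: &mut dyn JsEngine, script: &str) -> String {
    engine
        .eval(script)
        .unwrap_or_else(|e| format!("script error: {e}"))
}
```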
Implement DOM and other APIs
Once we have a working JavaScript interpreter in the parser, we can try to implement the many APIs that are available. Most notable will be the DOM, an interface to the document tree that allows manipulating it. This also includes manipulation of the CSS tree (CSSOM).
We should start gradually with some smaller APIs, and work our way to the more complex ones.
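As a feel for what "DOM as an interface to the document tree" means in practice, here is a toy sketch of an `appendChild`-style operation. Storing nodes in an arena and linking them by index is one common Rust pattern for parent/child trees; all names here are illustrative, not Gosub's actual tree API:

```rust
/// A node in a toy arena-based document tree.
struct Node {
    name: String,
    children: Vec<usize>, // indices into the arena
}

struct Document {
    arena: Vec<Node>,
}

impl Document {
    /// Roughly document.createElement(name).
    fn create_element(&mut self, name: &str) -> usize {
        self.arena.push(Node { name: name.to_string(), children: Vec::new() });
        self.arena.len() - 1
    }

    /// Roughly parent.appendChild(child), over arena indices.
    fn append_child(&mut self, parent: usize, child: usize) {
        self.arena[parent].children.push(child);
    }

    /// Serialize a subtree, loosely like outerHTML for empty elements.
    fn to_html(&self, id: usize) -> String {
        let node = &self.arena[id];
        let inner: String = node.children.iter().map(|&c| self.to_html(c)).collect();
        format!("<{0}>{1}</{0}>", node.name, inner)
    }
}
```

The real DOM adds attributes, text nodes, events, and live collections on top, which is why starting with the smaller APIs first seems sensible.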
Figure out "the rest"
Once we have all this done, we're nowhere near ready. There are a lot of things we're missing and still need to figure out. The first step is figuring out WHAT we need to figure out. We don't know exactly which components we need to implement in order to render sites correctly (think about YouTube with direct video rendering, for instance). Even though this is the last step, it would be good to try to figure out what we are missing and document it.