hello, and thank you for the introduction :) before we get started, i'll quickly point out that my aim with this talk is to brainstorm, share my thoughts, and hopefully spark some discussion about the topic. i'm *not* presenting anything concrete or complete today, so please bear with me as we go through the talk. :)

one might ask: what is the difference between the (inter)net and the web, if any? are they not the same? after all, the two often seem to be used interchangeably. a reasonable definition for the internet is that it is an interconnected network of networks, where these networks and the nodes/devices comprising them use tcp/ip to communicate with each other. very briefly, tcp/ip -- also referred to as the internet protocol suite -- is a set of communication protocols consisting of tcp (transmission control protocol, for reliable, error-checked, and ordered delivery of data) and ip (internet protocol, for delivering packets from source to destination host using the four-octet ipv4 addresses in the packet headers).

the web is part of the internet, and consists of websites, such as https://www.gnu.org, which in turn consist of webpages, like https://www.gnu.org/gnu/about-gnu.html, written in an sgml-derived markup language called the 'hypertext markup language', or html. well, html and css (cascading style sheets), for styling the page elements: things like changing colours, adding margins, setting background images or colours, animating things, and much more. ... oh, and there's javascript too.

looking back in time, the web was initially a method for sharing documents and information -- and later media -- with others, and it was a great one at that. you can still browse the very first website at http://info.cern.ch/hypertext/WWW/TheProject.html to get an idea of the kinds of websites and webpages that people wrote and read back then. and it wasn't just for serious business or work: if you look at some websites from the mid 90s or early 2000s, you can see all kinds of personal and/or informal sites by enthusiasts.

as an aside, i'd like to give a quick shout out and say thanks to the internet archive folks for the awesome wayback machine, at web.archive.org, which has been archiving websites since way back in 1996, preserving them indefinitely for archival/historical purposes. and it's great for nostalgic purposes as well :) so, if you're wondering how a particular website looked back in the day, or if you actually do know how it looked and you miss it dearly, there's a chance it may have been archived by the wayback machine. i highly recommend giving it a try to see what (parts of) our (digital) world looked like years ago, and perhaps how it's come to be what it is today. i'm not ashamed to admit i've spent many hours on the wayback machine travelling way back in time and looking around the old web.

early on, websites were almost entirely textual, and gradually other media like images -- and, much later, videos -- found their way onto webpages as well. i think that was great from an artistic perspective -- though, another quick side note, that is certainly not to discount any of the creative and beautiful ascii art by artists like joan stark, known by her initials jgs on her pieces, veronica karlsson (vk), and others.
https://en.wikipedia.org/wiki/Joan_Stark
https://web.archive.org/web/20160319004113/http://www.ludd.luth.se/~vk/

i'd highly recommend visiting joan stark's wikipedia page, which links to an archived copy of her website (thanks to the wayback machine), and other neat things like articles on the history of ascii art, how she got started with it, and links to ascii art tutorials by herself and other artists.

anyway, closing the tangential parenthesis: so far we had text, and later a mixture of text and media, but then along came javascript. before javascript, webpages were fully static: once a page was loaded by the user's browser, there was no way to change it afterwards. apparently this limitation was an annoyance among the growing web developer communities of the time, so netscape decided to add a scripting language to netscape navigator, which happened to be the most-used web browser at the time. netscape hired Brendan Eich in 1995 to embed the scheme language, a lisp dialect, into their browser. yay!! right? ... right?? well, apparently before Eich could get started, netscape collaborated with sun microsystems to embed java into their browser as a component language. so, netscape management decided that the new scripting language accompanying java needed to have a syntax close to java's, rather than to scheme's. *sigh* and so Eich created the first version of javascript -- first called mocha, then briefly livescript -- in ten days, to be that scripting language. and, well, the rest is history.

from a technical standpoint, there is a lot to be said about a language designed in ten days, and about its ecosystem, with grave incidents like the left-pad fiasco from a few years ago; and plenty of people have talked about it already, notably Gary Bernhardt in his two famous talks "the birth & death of javascript", where he examines the history of javascript from 1995 to 2035, and "wat", where he looks at some interesting features/quirks of ruby and javascript. though in my opinion javascript is not a great language, it certainly has some nice, functional parts to it, and many people have worked on gradually improving it over the years. but that is not the focus of this talk. instead, i'd like to focus on the ramifications of the addition of javascript to the web for privacy and user freedom.

so, we went from all-textual websites, to a mixture of text and media, to ... arbitrary programs and software downloaded and run on your machine by none other than your web browser. this is literally "the wwworst app store" ever -- a highly recommended short read by fellow gnu hacker and free software activist Alex Oliva: https://www.gnu.org/philosophy/wwworst-app-store.html

so, why are we okay with this, and why do we accept it when we know better? the current state of affairs with javascript websites/applications should be rather alarming, especially to those of us who use gnu/linux regularly or exclusively. i believe one of the important reasons why gnu/linux and some other unix-like operating systems like the bsds have had a better overall security track record compared to some other operating systems is the distro/distribution model, and the critically important, yet often thankless, role and work of the distro package maintainers.
distros, depending on their size, have one or more trusted developers and/or maintainers who take up the tasks of overseeing and maintaining the distro's infrastructure, and of packaging and maintaining various pieces of software in the repositories for users to download. this includes the responsibility of keeping malware out of the repositories, and, in the case of wholly free distros, also keeping out any nonfree or proprietary software, which is often malware. so, on gnu/linux and other unix-like distros, we trust and rely on our distro package maintainers to have our best interests at heart when it comes to keeping malware and/or nonfree garbage out of our distros and systems. of course, those of us who are more tech-savvy can, and should, check on the work of the distro developers and maintainers to try and verify that they indeed do as they say. and even help them out if we can! helping create and maintain packages for our distro is a great way to contribute to the free software community, and to put our money where our mouth is. but, again, not all of us are tech-savvy or interested in doing these tasks, so the package maintainers and other occasional contributors have a very important role to play. on top of all of this, our distros do not all of a sudden send us arbitrary programs to be installed and run on our machines; we are in control of when, how, and exactly what pieces of software we fetch and install from our distros.

so, naturally, i'd like to see similar notions and levels of control, trust, and consent in the browser for javascript applications. i'd like to be able to use an inherently different mode of browsing, where javascript is disabled by default (much like how 'native' programs don't arbitrarily get installed and run on my machine), and where i could selectively pick and choose which programs to allow the browser to download and execute. it may be possible to achieve something somewhat like this using an amalgamation of several browser extensions like gnu librejs, noscript, and ublock origin (which are great in their own right), but i don't think a fully smooth experience is currently possible. i think it would be great if something like this were a web browser's built-in core feature, but i'll happily take a browser extension for it as well.

also, browsers should facilitate and normalize the ability for users to use custom javascript programs in place of one or more of the programs shipped by a particular website. this is again possible to some extent using browser extensions -- and we have distributed such extensions with gnu icecat (gnu's version of firefox) -- but a dedicated and convenient interface for doing this would be nice. for this to be easier and more viable, there also needs to be a change in the status quo and culture around web services and their api endpoints: companies or people running web services should provide well-documented and publicly-accessible apis so that people can write and use custom programs for communicating with and using their services. simply throwing a purely technical answer at this issue is not going to solve it once and for all.
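before moving on, here's a rough sketch of what the core of that 'disabled by default, selectively allowed' mode could look like. to be clear, this is a hedged illustration, not a complete or hardened solution: it's written as the background script of a hypothetical manifest v3 webextension using the declarativeNetRequest api (so it assumes the "declarativeNetRequest" permission in the manifest), the allowlist is made up for illustration, and it only covers externally-loaded scripts, not inline ones -- one more reason i'd rather see this built into the browser itself.

    // block all script loads by default, then punch holes for sites
    // the user has explicitly allowed.
    const userAllowedSites = ['gnu.org']; // hypothetical user-chosen allowlist

    const rules = [
      // rule 1: with no url condition given, this matches and blocks
      // every externally-loaded script, everywhere, by default
      { id: 1, priority: 1,
        action: { type: 'block' },
        condition: { resourceTypes: ['script'] } },
      // one 'allow' rule per site the user has opted in to; these match
      // scripts requested by pages on the allowed sites
      ...userAllowedSites.map((site, i) => ({
        id: i + 2, priority: 2,
        action: { type: 'allow' },
        condition: { resourceTypes: ['script'], initiatorDomains: [site] } })),
    ];

    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: rules.map((r) => r.id), // clear any stale copies first
      addRules: rules,
    });

it seems the universe -- actually, fellow free software hackers/activists -- have answered my wishes!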
yesterday, on the first day of libreplanet 2022, i stumbled onto a great talk by Nicholas Johnson and Wojciech Kosior (i apologize if i mispronounce your names) titled "taking back the web with haketilo", where they introduced an awesome new browser extension named haketilo that seems to aim to do exactly what i was describing: putting control in the user's hands over which pieces of a website's javascript to allow or block, as well as making it easy to add or install their own custom javascript code to run when visiting a website.

i initially had a whole bit in my notes about having a collectively-edited and -maintained knowledge base of widely-used javascript libraries and/or programs, regularly audited by programmers, where the listed programs are guaranteed to be freely licensed and to either not require certain permissions or only use those permissions in certain ways. but from what i understood about haketilo from yesterday's talk, it might be doing just that. excellent! perhaps one point i'll mention is that i believe it's very important that we aim to have the repository be truly transparent and accessible to all users for use and edits, and *not* implemented as a walled garden where a fixed select few retain total and indefinite control over it without any possibility of outside oversight/influence. public wikis come to mind as neat tools for publicly maintained and editable sources of knowledge, but their use for security-, privacy-, and/or freedom-sensitive code may not be wise, and some level of supervision and review, in order to establish a notion of trust, might be desirable.

another idea i'd like to put out there: perhaps something like a permission system would be a decent starting point, where the user could select which permissions (corresponding to various kinds of tasks, such as making remote network requests) they would like to grant to a particular code snippet served by a specific website. i'd imagine such a feature would be best implemented, in a hardened way, by the browser itself, though it might be possible to implement some variant of the idea as a browser extension as well. (i'll come back to this idea with a small sketch later in the talk.)

i missed the earlier parts of the talk, so i'm looking forward to watching it in its entirety once the conference recordings are published. please check out the talk entry on the libreplanet 2022 website linked on the slide, as well as the home page of the haketilo project itself, and try it out and contribute to it if you're interested! Nicholas, Wojciech, if you're watching this talk, thanks for the shout out during your talk, and kudos for your exciting work on haketilo! please email myself or the gnuzilla-dev mailing list to discuss the inclusion of haketilo in future releases of gnu icecat. :)

okay, so far i've talked a lot about the web. but how about other, simpler protocols like gopher, gemini, spartan, etc.? i really like the good old gopher protocol; a few months ago i started hosting a gopherhole (analogous to a website in gopher land) and a phlog (gopher log, like a weblog) on the server hosting my website, and have really enjoyed its simplicity. its being centred around plain text draws me in, and i've found that it's most often quite sufficient for the kinds of topics i tend to read or write about. the idea of file types being part of the requested address was unusual to me at first, but i've gotten quite used to it by now and don't mind it.
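speaking of gopher's simplicity: the whole protocol is essentially 'open a tcp connection to port 70, send a selector followed by CRLF, and read until the server closes the connection'. here's a hedged little node.js sketch of that; the floodgap server is just one well-known public gopher server (also linked in the references):

    // fetch a gopher resource: connect to port 70, send the selector
    // plus CRLF, then read until the server closes the connection.
    const net = require('net');

    function gopherFetch(host, selector = '') {
      return new Promise((resolve, reject) => {
        const chunks = [];
        const sock = net.createConnection(70, host, () => {
          sock.write(selector + '\r\n');
        });
        sock.on('data', (chunk) => chunks.push(chunk));
        sock.on('end', () => resolve(Buffer.concat(chunks).toString('utf-8')));
        sock.on('error', reject);
      });
    }

    // an empty selector requests the root menu (a gophermap); each line
    // starts with an item type, e.g. '0' for text files, '1' for menus,
    // and the nonstandard-but-common 'i' for informational text.
    gopherFetch('gopher.floodgap.com').then(console.log);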
but i've found that the lack of certain things -- like optional but standardized encrypted sessions, standardized support for virtual hosting (serving multiple domains from the same server), and perhaps mime types -- sometimes left me wishing for a bit more. these have not been deal-breakers for me, though, so i continue hosting a gopherhole of my own and reading others'. the gopher daemon software i currently run on my server is not 100% standards-compliant, and supports some widely-accepted, widely-used nonstandard features, like the 'i' item type for informational messages in gopher menus (also called gophermaps), and using utf-8 as the default output character set instead of us-ascii. i think that's totally fine. for essentially all the english text i write, using utf-8 is transparent to the user, since i mainly use the us-ascii subset; and for non-english text, many clients correctly decode and display utf-8.

as for gemini, started in 2019, i have not used it much, for a few reasons. first, i'll mention some of the things i like about gemini, namely that it supports virtual hosting (using sni), has mime types, uses utf-8 encoded requests, and allows a lang parameter for indicating the language of the gemtext document being served. as for the reasons why i have not used it much: it's mainly that i found gopher to be good enough for my purposes, and that i didn't quite agree with some aspects of gemini -- such as the fact that its markup, gemtext, uses one long unwrapped line per paragraph (whereas i much prefer neatly hard-wrapped lines at around 60 or 70 characters), and its mandating the use of TLS encryption (note that i am *not* against encryption -- quite the opposite -- but i'd still like the ability to not always use TLS and/or to use a different encryption mechanism, such as SSH). these are some of the reasons why i currently use gopher rather than gemini. per the project leader, gemini is heavier than gopher, lighter than the web, and will not replace either. so, thus far i've used gopher, and it's been able to meet most of my needs; when it can't, i fall back on the web, in the form of very simple webpages.

as for spartan, i like that TLS during transport is not mandatory, but i dislike that it doesn't seem to be mentioned in the specification at all, even as a possible choice. i also don't really like the fact that, like gemini, it uses gemtext as the main document type, and that, unlike gemini, it doesn't support a lang parameter to indicate the document's language. for these reasons i haven't really used spartan either.

this is of course not to discredit or discount any of these projects and efforts, or the work of their authors and contributors. you should definitely check them out. i think projects like gemini and spartan are interesting in their own right, as accounts of experiments and attempts at envisioning how a 'modern' gopher protocol might look if designed today. but i've found that, for my personal taste, they seem to add much on top of gopher, yet still be far from addressing some legitimate use-cases currently covered by the web -- such as embedding images and videos, neatly typeset math equations, and accessibility features like descriptive text for code blocks (useful, e.g., when including an ascii art block, or an image or rendered math equation, if those were supported).
these and other similar points, combined with the fact that these protocols are designed today and yet have some (deliberate) shortcomings or exclusions, make some folks wary of them, perceiving them as exclusionist or elitist. i think that's an interesting point that may have some truth to it. on the one hand, i sympathize with being tired of the modern web and not wanting to have anything to do with it. but does that, or should that, include the very foundation of it, the hypertext transfer protocol (http)?

some folks in favour of completely separating efforts like gemini from the web, by using an entirely different protocol instead of http, argue that doing so draws a line in the sand: on one side of the line, people can browse and visit arbitrary links without having to worry about accidentally stepping onto the other side, where pages carry all kinds of javascript-based user-tracking and user-surveilling garbage. i don't entirely agree. first, i believe that, much like those who run a web server, someone who hosts e.g. a gemini server could also surveil their users to some extent and learn some basic information about them. further, with a browser with an eye towards user privacy and freedom, as described earlier in the talk, one could choose to only accept javascript that is freely licensed and trusted. or, more generally, if one could define rules to selectively block or accept files from different domains based on their mime type, one would have control over exactly what file types would be requested from the server depending on the website -- thus addressing head-on the issue of crossing the boundary between sites with differing degrees of trust.

for these reasons, and under the conditions described above, i believe having a hypothetical replacement for the web sit alongside it and be based on http would not only not be a terrible outcome, but would bring the benefits of a common, shared space: increased chances of new users discovering and exploring the new space, and ease of getting started with it (thanks to being able to link directly to material), especially if the material is semi-usable/accessible even without anything special installed in the user's browser, and so on.

here is one possible idea: use plain text files, with a few simple syntactic rules for things like bold text, titles, images, links, quotes, and so on. then, for an improved experience, add extra convenience using a browser extension. i've seen at least one extension for firefox-based browsers that can linkify urls based on patterns, or fetch the image files they point to and display them inline, so something like this is rather doable. note that the material would still be very much readable and easily accessible by the user without them having to install any extensions or separate programs. perhaps, if we wanted to get a bit fancier, we could have math equations -- say, written as s-expressions -- rendered using mathml in browsers, and perhaps exported to tex and rendered by tex when using some other custom client or browser. maybe s-expressions turn out to not be convenient for this, and instead we go with the tex syntax directly, or some other syntax. i think all of this so far is doable as a browser extension, and maybe even better if implemented directly as part of the browser.
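to give a flavour of that 'extra convenience via extension' idea, here's a hedged sketch of a tiny content script that linkifies bare urls on a page. (when a browser displays a text/plain file, it wraps the text in a generated html page, so a content script like this could act on it; the details here are illustrative, not a polished extension.)

    // walk the page and turn bare urls in text nodes into clickable links.
    // the capturing group makes split() keep the urls as the odd-indexed parts.
    const urlPattern = /(https?:\/\/[^\s<>"')]+)/;

    function linkify(node) {
      if (node.nodeType === Node.TEXT_NODE) {
        const parts = node.nodeValue.split(urlPattern);
        if (parts.length === 1) return; // no urls in this text node
        const frag = document.createDocumentFragment();
        parts.forEach((part, i) => {
          if (i % 2) { // odd indexes are the captured urls
            const a = document.createElement('a');
            a.href = part;
            a.textContent = part;
            frag.append(a);
          } else if (part) {
            frag.append(part);
          }
        });
        node.replaceWith(frag);
      } else if (node.nodeType === Node.ELEMENT_NODE && node.tagName !== 'A') {
        // snapshot childNodes, since linkify() may replace them as we go
        [...node.childNodes].forEach(linkify);
      }
    }

    linkify(document.body);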
i believe that with a small enough and finite scope -- but one that still accommodates a large number of people and their legitimate use-cases -- we could arrive at something that could both be implemented as one or more extensions or direct patches for existing web browsers, and be reasonably implementable by one or a few people. from the examples i've shown, it may be evident that i quite like plain text, s-expressions, and lisp/scheme languages, and i really think we missed an opportunity to have better languages back in the 90s by going with html and javascript. our replacement for the web doesn't have to follow those same footsteps or make the same choices. if we are indeed re-imagining how sites look and work on a fundamental level, we don't have to assume the burden of html or javascript. whatever we go with, i think a full programming language should be used only when actually needed (not be required just to be able to read some text, as is sadly the case with many websites today), should have individual code blocks or the entire page carry a small license tag, and should only be run with the user's informed consent, perhaps incorporating some of the earlier ideas about permissions and about what kinds of functions the user allows to be run.

in general, the users should have the final say about how their browser/client displays pages. this is something that, for instance, several gemini clients i have tried get right. i want to be able to easily switch between a light and dark theme, use whatever font family and font size i like, and decide if, when, and where i would like to allow some code blocks in a document to be run, and so on. oh, and one more idea, about transports: http does not have to be *the only* protocol we use. perhaps we could supplement it with others, like nncp or even git. Alex Schroeder and Solderpunk have had some interesting ideas on this topic that i think are well worth a read.

okay, so with all of that in mind, what can we do? how can we help? here are some ideas for each of us.

learn more about, and use, freedom- and privacy-enhancing browser extensions like gnu librejs, jshelter, haketilo, and ublock origin. another shout out to the "taking back the web with haketilo" talk by Nicholas and Wojciech yesterday, where they mentioned and described several other similar extensions.

"saying no to unjust computing even once is help". so, try to say no to nonfree javascript whenever and as much as you can. if you are a frequent user of a website that serves nonfree javascript, and especially if it breaks without it, you could try writing an email to the webmaster/maintainer of the site asking that they license their javascript code freely and/or make their site work without it.

check out and participate in small web or small net initiatives, as well as tilde or pubnix communities like tilde.team and sdf.org.

as a developer, please release and mark your javascript code as free software, preferably in a way that can be recognized by gnu librejs (more on that in a moment). there are plans to improve how librejs recognizes free software, with the goal of hopefully detecting more free/libre javascript code as free.

check out protocols like gopher and gemini, see what they have to offer, and think about what an http-based approach could look like. experiment with text-centric approaches that aim to sidestep html and javascript and their issues as much as possible. envision what a different embedded scripting language in documents could look like -- for instance gnu guile scheme -- with a focus on freedom, security, and a permission model.
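coming back to the permission-system idea from earlier, here's a toy sketch of the *shape* of the interface i have in mind, where a page's code block only receives the capabilities the user has granted it. to be very clear, this is a hypothetical illustration, not a real sandbox -- shadowing globals like this is trivially escapable, which is exactly why i'd want the real thing implemented, hardened, in the browser itself:

    // run an untrusted snippet with only the capabilities granted by the
    // user; 'granted' is a Set of permission names like 'network' or 'dom'.
    // illustrative only -- NOT a real sandbox. (browser context assumed.)
    function runWithPermissions(source, granted) {
      const gatedFetch = granted.has('network')
        ? (...args) => fetch(...args)
        : () => { throw new Error('network access not granted by user'); };
      // shadow powerful globals with gated (or absent) versions inside
      // the snippet's scope
      const snippet = new Function('fetch', 'document', source);
      return snippet(gatedFetch, granted.has('dom') ? document : undefined);
    }

    // e.g., the user granted this snippet no permissions at all:
    runWithPermissions(
      'try { fetch("/x"); } catch (e) { console.log(e.message); }',
      new Set());

and, on the earlier point about marking your javascript as free software in a librejs-recognizable way: librejs looks for @license / @license-end magic comments around the code, containing a magnet link to the license text plus a short license identifier. please copy the exact magnet uri and identifier for your license of choice from the librejs 'free your javascript' page linked in the references; for the gnu gpl version 3, if i recall correctly, it looks roughly like this:

    // @license magnet:?xt=urn:btih:1f739d935676111cfff4b4693e3816e664797050&dn=gpl-3.0.txt GPL-3.0
    // ... your freely-licensed javascript goes here ...
    document.body.append('hello, free software!');
    // @license-end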
if we want to replace the web with something better, we need to genuinely consider, and try to accommodate whenever reasonably possible, the legitimate use-cases and needs people have of a documentation/information sharing system.

you can contribute to other, smaller browser projects like luakit, netsurf, and visurf, and help develop them to make a statement against the browser duopoly/monopoly. and if you can, please help maintain liberated, freedom- and privacy-enhanced versions of firefox, such as gnu icecat, parabola's iceweasel, and trisquel's abrowser. unfortunately, today the web and its browsers seem increasingly hostile towards user privacy and freedom. so, the gnu/linux distros following the gnu free system distribution guidelines package and maintain their own versions of firefox with better privacy settings. this is wonderful, but i believe we need to try and share as much work as we can across these efforts, to help make the best of limited time and resources. i have some hopes and ideas about unifying our efforts, which i plan to share on the gnuzilla-dev and gnu-linux-libre mailing lists for coordinating these freeing/liberating efforts.

on your websites, aim to use as little javascript as possible; when you do use it, please license it freely, and follow principles like 'progressive enhancement' to make sure the heart of the work/material remains accessible and readable even without javascript or css.

lastly, if you happen to work on either side of the current web browser duopoly, or on web standards in general, please consider: do we really *need* all the new complexity? 12 years ago, mozilla removed support for gopher on the grounds of reducing complexity and attack vectors. do the current web and its browsers resemble anything but complexity? firefox and chromium are now each more than 30 million lines of code -- on the same order of magnitude as, and in fact currently larger than, the kernel, linux. new code and complexity keep being added, while at the same time user freedom, control, and often privacy seem to be gradually lessened and eroded.

to summarize, the net i'd like to see beyond the current web 1. does not facilitate tracking users; 2. has freedom, control, and consent built in; and 3. is in many ways simpler, has a reasonable and finite scope, and has several server and client implementations.

thanks!
bandali@gnu.org
kelar.org/~bandali

references
----------

http://info.cern.ch/hypertext/WWW/TheProject.html
https://web.archive.org
https://en.wikipedia.org/wiki/Joan_Stark
https://web.archive.org/web/20160319004113/http://www.ludd.luth.se/~vk/
https://wiki.haskell.org/The_JavaScript_Problem
https://en.wikipedia.org/wiki/JavaScript#Security
https://en.wikipedia.org/wiki/Browser_security
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=javascript
https://www.gnu.org/philosophy/wwworst-app-store.html
https://libreplanet.org/2022/speakers/#5790
https://hydrillabugs.koszko.org/projects/haketilo
https://www.gnu.org/philosophy/javascript-trap.html
https://www.gnu.org/philosophy/who-does-that-server-really-serve.html
https://www.fsf.org/campaigns/freejs
https://en.wikipedia.org/wiki/Gopher_(protocol)
https://gopher.floodgap.com
https://gemini.circumlunar.space
https://spartan.mozz.us
http://www.aaronsw.com/weblog/000574
http://www.aaronsw.com/2002/rss30
https://alexschroeder.ch/wiki/2018-12-01_Thinking_about_the_real_RSS_3.0
https://alexschroeder.ch/wiki/2022-02-08_NNCP_distributed_text
gopher://zaibatsu.circumlunar.space/0/~solderpunk/phlog/low-budget-p2p-content-distribution-with-git.txt
https://gopher.tildeverse.org/zaibatsu.circumlunar.space/0/~solderpunk/phlog/low-budget-p2p-content-distribution-with-git.txt
https://www.gnu.org/philosophy/saying-no-even-once.html
https://tilde.team
https://sdf.org
https://www.gnu.org/software/librejs/free-your-javascript.html
https://www.gnu.org/distros/free-system-distribution-guidelines.html
https://bugzilla.mozilla.org/show_bug.cgi?id=388195
https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html
https://drewdevault.com/2021/09/11/visurf-announcement.html
https://drewdevault.com/2021/09/27/Let-distros-do-their-job.html
https://en.wikipedia.org/wiki/Progressive_enhancement
https://kelar.org/~bandali/talks/net-beyond-web.html
gopher://kelar.org/0/~bandali/talks/net-beyond-web.txt

marked with cc0 and dedicated to the public domain