Nico's Microblog
Below you'll find an assortment of thoughts, ideas, webmentions I've sent, daily logs, bookmarks, and concepts that I wanted to share. If you'd like to stay up to date with this feed, you can subscribe with RSS here.

I think what makes the web such an interactive and captivating platform is how much potential it has. It started off as a way to share simple documents across a network, and it's evolved into so much more than that. With this expanding essay "experiment" I've created, I've taken a look at how essays or articles could be explored interactively. Before I explain it, check it out in the link above, then come back here.

I've drawn inspiration from a number of different demonstrations like these. The idea definitely did not originate with me, but I feel like I added a couple of twists, so I can do my part to help these sorts of ideas progress! In my demo, we start off with a short little essay. As readers click on different highlighted words, the essay expands before their very eyes! I added short, medium and long buttons depending on whether you want to explore the essay in a more interactive fashion or simply see the end result once everything is expanded.

In one of the examples I linked above, it starts off with a few words and each time a word is clicked, more words are added. What I thought was interesting about that implementation is that you could add more data to a single word, so you might click the word "I" multiple times and get multiple different phrases out of it. In my system, you can do just that. From the second example I linked above, I liked how the word you click tells you what type of expansion you're going to get. For example, if I wanted more exposition, I'd click the word that said exposition. In the other system, there's no way to know what sort of extra information you're going to get. My expanding essay doesn't work exactly like the other examples I linked, and that's the whole point! I wanted to explore this format and add my own personal touch to it too!

I wrote a small markup language to create these interactive text documents. I tried to make it as easy as possible to use. Essentially, you wrap words with square brackets and separate the text from the button with a vertical bar. So if I wanted to add exposition to this sentence I'd write something like [exposition| here is where I explain more...]. If you want a word to be able to be clicked multiple times, you simply add the extra text in parentheses, like so: [exposition| here is where I explain more (even more words of exposition)]. What makes this little markup language much more powerful is the ability to nest as many of these pieces of markup as you want, so you can create layers of hidden content that take several clicks to fully reveal.

That's it! I hope you enjoyed this little demo. I think there's lots of potential for making this format even better. I'm not saying everything should be written like this, but I do think it's important to explore new ways to present content.
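For the curious, here's a rough sketch of how this kind of markup could be parsed in JavaScript. This isn't the actual code behind my demo, just a minimal illustration of the idea: square brackets wrap an expandable region, the vertical bar separates the button label from the hidden text, and parentheses hold the extra phrases for repeated clicks. Handling nested brackets would need a small recursive parser rather than a single regular expression.

// A minimal, non-nested sketch: find [label| hidden text (more text)] spans
// and split them into a button label plus a list of expansions.
function parseExpansions(source) {
  const pattern = /\[([^|\]]+)\|([^\]]*)\]/g;
  const pieces = [];
  for (const match of source.matchAll(pattern)) {
    const label = match[1].trim();
    // Phrases in parentheses become extra expansions revealed on repeated clicks.
    const extras = [...match[2].matchAll(/\(([^)]*)\)/g)].map(m => m[1].trim());
    const first = match[2].replace(/\([^)]*\)/g, "").trim();
    pieces.push({ label, expansions: [first, ...extras] });
  }
  return pieces;
}

console.log(parseExpansions(
  "A short essay. [exposition| here is where I explain more (even more words of exposition)]"
));
// [ { label: 'exposition',
//     expansions: [ 'here is where I explain more', 'even more words of exposition' ] } ]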
I'm really happy to note that with the extra time I've had on my hands lately, I've revamped my entire front page and the navigation bar to better showcase the different work I have on my site! I think the separation between the different types of content that I post here is a lot clearer now, and the homepage has easy access to all my latest doings.
Today is my last day of my 30 day "sharing what I learn" challenge. I want to spend this last one talking about one more symbol from Teller's Logic Primer, the book on formal logic that I spoke about a week ago. That symbol is the biconditional, which captures the idea of "if and only if" (a conditional that works in both directions), and it can look like this: "≡". If we remember from before, the "⊃" symbol represents the conditional in Teller's Logic Primer. So the sentence "X⊃Y" means "if X then Y". The sentence "(X⊃Y)&(Y⊃X)" is equivalent to "X≡Y". This makes the "if and only if" symbol work like the conditional, but in both directions. As an example, suppose the sentence A represents "I like chocolate" and B represents "I will buy chocolate". If we wanted to say both "if I like chocolate then I will buy chocolate" and "if I buy chocolate then I like chocolate", we could succinctly represent that as "A≡B".
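If it helps to see it in code, here's a tiny sketch (my own helper names, not Teller's notation) that runs through all four truth-value combinations and confirms that (X⊃Y)&(Y⊃X) always agrees with X≡Y:

// The conditional and the biconditional as truth functions.
const cond = (x, y) => !x || y;                    // X ⊃ Y: "if X then Y"
const bicond = (x, y) => cond(x, y) && cond(y, x); // X ≡ Y: "X if and only if Y"

for (const x of [true, false]) {
  for (const y of [true, false]) {
    // The biconditional is true exactly when X and Y have the same truth value.
    console.log(x, y, bicond(x, y) === (x === y)); // prints true every time
  }
}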
Although this is the last post in this series, this isn't the end of me posting on this stream, not by a long shot. If you're interested in more notes here, check back when you can or subscribe to my RSS feed! Until next time!
So today I'm going to be speaking about reply-contexts! If you're replying to a post on someone else's site from your own site, you'll want to make your intent clear! We can do that using our lovely microformats! This allows others to interpret our replies correctly. Here's the markup for it:
<div class="h-entry">
  <!-- The reply context: a citation of the post you're replying to -->
  <div class="u-in-reply-to h-cite">
    <p class="p-author h-card">You</p>
    <p class="p-content">I love the web!</p>
    <a class="u-url" href="permalink"><time class="dt-published">2020-02-08</time></a>
  </div>
  <!-- Your actual reply -->
  <p class="p-author h-card">Me</p>
  <p class="e-content">I do too!</p>
</div>
Today I wanted to speak about RelMeAuth! This is a form of authentication that lets you use your own personal site to log in as "yourself" on multiple services. What's interesting about this protocol is that it allows you to make your domain name the core of your online identity and to rely on the security as well as the safe practices of external authentication providers to actually log in to a service. It's incredibly easy to enable RelMeAuth for any supporting external site. All you need to do is add a link in your "<head>" element to an external service, like so:
<link rel="me" href="mailto:myownemailatanyservice@example.com">
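You can also point a rel="me" link at a profile you control on another service; for verification to work, that profile typically needs to link back to your domain as well. A hypothetical example (the username is just a placeholder):

<link rel="me" href="https://github.com/your-username">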
Today I wanted to speak about a new feature that may be coming to your web browser soon: scroll to text fragment! This feature allows you to share a URL that will immediately scroll to certain text on a page. Let's say you wanted to share a Wikipedia article, but the part you wanted to focus on wasn't a header. No problem, you can link to any sentence in the document! This new form of linking will be especially handy for long documents. It'll be exciting to see all the use cases people will come up with for it! Until next time!
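As an illustration, a link using this feature might look something like this (the page and phrase are placeholders); everything after "#:~:text=" is the text the browser should scroll to and highlight:

https://example.com/long-article#:~:text=the%20exact%20sentence%20to%20highlight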
Prolog is a fascinating language I am starting to discover. This interesting language is an example of logic programming. Lots of modern programming languages derive many of their ideas and syntax from C (which was influenced in turn by ALGOL). Logic programming throws a lot of those ideas out the window and roots itself in predicate/first-order logic, which is a type of formal logic. Since I've recently gotten interested in formal logic as a discipline, I thought better understanding Prolog would help show me how you could use formal logic. Although it's not hugely used in industry, it's a really cool example of how different things can be in programming, and you really have to wrap your head around a whole new way of thinking. A core premise of Prolog is that facts are a key component of programming: Prolog is based around the idea of writing queries against these facts. A fact may look something like this:
human(tom).
That's a fact. I've said that "tom" is a human. Now we can load that fact into a Prolog interpreter and ask whether "tom" is a human. Queries that are true will print "Yes" and false ones "No". So we ask if "tom" is a human by typing our query at the interpreter's prompt, and Prolog will tell us if we're right or not. We'd do that like so:
?- human(tom).
Prolog would say "Yes" in response to that. This is a very simple example, but you can already see that logic programming is fundamentally different from most programming. If this interests you, you can take a look at the book "Adventures in Prolog", which I'm working my way through now. Until next time!
Today I'm going to be discussing a topic I learned about a while ago, but I thought I'd share it today since I've been speaking about CSS for the past few days. Let's talk about CSS in React! React.js is a front-end view library that allows you to easily create user interfaces for the browser (and potentially iOS and Android through React Native). When you're normally styling some elements, you write down your styles in a CSS file, write your selectors and that's that. Yet in React, there's lots of opportunity to "componentize" your elements. Let's take a look at the different ways to use CSS in a React project! From styled components to just using a plain CSS file, there are tons of options!
Using plain CSS is probably one of the simpler solutions. You need to make sure your CSS is included on your site, and then you can use classes and IDs on your elements in order to style them. If you're using "create-react-app", a way to import your CSS is as follows:
import "./myFile.css"
<div className="main"></div><input id="importantInput"/>
CSS modules are a slightly more complicated way to use your CSS files. This method is built in if you're using "create-react-app", as long as the file is named with a ".module.css" suffix. To import a CSS file using CSS modules you'd write an import statement like this:
import style from "./myFile.module.css"
<div className={style.main}></div>
If you'd rather not use a CSS file, you can write your CSS as an object and apply it as follows:
function App(props) {
  // Inline styles are written as a plain object with camelCased CSS properties.
  const obj = {
    backgroundColor: "blue"
  }
  return <div style={obj}></div>
}
Finally, another option available to you is styled components. An example of a styled components library is "styled-components". There's lots to choose from so find one that best fits your needs.
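As a rough sketch of what that looks like with the "styled-components" library (the component name here is just an example), you attach the CSS directly to a component and then render it like any other element:

import styled from "styled-components";

// A regular <div> whose CSS lives with the component itself.
const BlueBox = styled.div`
  background-color: blue;
`;

function App(props) {
  return <BlueBox>Hello!</BlueBox>;
}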
Good luck and use whatever tool best suits you! Until next time!
/* Make top-level headings red */
h1 {
  color: red;
}

/* Make a label red when it immediately follows a checkbox */
input[type=checkbox] + label {
  color: red;
}

/* Bold whatever element is marked as the current page, e.g. a nav link */
[aria-current="page"] {
  font-weight: bold;
}
Today I'm going to be talking about microformats again, like I did yesterday. The different microformats vocabularies, which are collections of class names, use different prefixes depending on their meaning. Big ones you'll see are classes starting with "p-" or "h-". This got me curious: why do they have different prefixes? Anything that is a "root" starts with an "h-". That means when we looked yesterday at how you could represent a profile using "h-card", the "h-card" class was used to envelop the whole card. Anything with a "p-" prefix, which is also very common, is a simple plain-text property, like a name or a summary. If you want to learn more about the different prefixes and microformats, you can check out this resource. Until next time!
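To make the two prefixes concrete, here's a tiny hand-written example (the name and note are placeholders): the "h-card" root wraps the whole card, while the "p-" classes mark plain-text properties inside it.

<div class="h-card">
  <span class="p-name">My Name</span>
  <span class="p-note">I write about the web.</span>
</div>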
Today I'm going to be discussing microformats! These allow you to add little bits of context to your website so that others can read it with ease. You might use the "h-card" vocabulary to create a small element that showcases your photo and name. Or use "h-entry" to mark up a post. The beauty of this simple format is that you can easily add it to your site and immediately start seeing tangible benefits. For example, microformats give you the ability to have your website interpreted by a microsub server, like we discussed yesterday, without having to create a special RSS feed file. Microformats2 works by adding classes to the HTML markup you've already created that describe what it means.
A simple example may look something like this:
<a class="h-card" href="https://mywebsitehere.example.com">
  <img src="/photo.png" alt="" />My name
</a>
Take a look at what microformats can do and you won’t be able to stop finding more uses for it! Until next time!
Today I want to talk about a different piece of another protocol that's currently being developed by the Indieweb community. Microsub is a piece of "plumbing", which is something that a user doesn't see directly, that allows you to follow different sites and receive updates when they make new posts. It works like an RSS reader but, just like yesterday's Micropub, it's loosely coupled. The server is the part that takes feeds/sites and subscribes to them. It cleans up that data and makes it easily presentable. All the user data is stored in the server. You can also create channels which contain certain posts. The client, the reader, displays your timeline, which includes the various posts from the feeds you're subscribed to. You can also view the channels you've created in your reader. The beauty of this protocol is that the client and server are completely separate parts. Again, like yesterday, this means you can use different pieces developed by different people and put them together without a problem!
I really think there's a lot of potential with this idea and I encourage you to check it out. It's amazing to see how different people contribute to try to make the web a better place. Until next time!
Today I wanted to discuss a really cool piece of Indieweb "plumbing" that I use on this site, the micropub protocol! This nifty tool specifies how a micropub server and client can communicate in order to create, update and delete posts for your website. What's great about this, and a lot of other Indieweb protocols, is the loose coupling between frontend and backend. Essentially, you can create your own micropub server (like I did) and use it with tons of different clients developed by other people! Some of them focus on just general plain text posting, while others enable things like check-ins, likes, bookmarks or keeping track of a book you're reading! This also works the other way around: you can use someone else's micropub server on your own site and create your own client. Or use both a client and server developed by someone else! Whatever you decide is up to you and that's the beauty of interoperability. It gives you choice.
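To give a rough idea of what the plumbing looks like, here's a sketch of the kind of request a client sends to create a simple post. The endpoint URL and token are placeholders; the form-encoded "h=entry" request is the basic create operation from the Micropub spec.

// Hypothetical client-side sketch: create a plain-text note on your site.
async function createNote(content) {
  const response = await fetch("https://example.com/micropub", {
    method: "POST",
    headers: { Authorization: "Bearer YOUR_ACCESS_TOKEN" },
    // Form-encoded body: h=entry tells the server we're creating a post.
    body: new URLSearchParams({ h: "entry", content }),
  });
  // A successful create responds with a Location header pointing at the new post.
  return response.headers.get("Location");
}

createNote("Hello from my own Micropub client!").then(console.log);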
This protocol isn't just an idea, it's used by countless people all the time. There are clients for your phone and browser so you can post from wherever you are. If you want to implement your own version, micropub.rocks makes it super easy to see if you're following the specifications, which are the rules that all the clients and servers follow. If you can pass all those tests, then your implementation will probably work with anyone else's!
If this piques your interest, I highly recommend you give it a look! Implementing a compliant server or client, drawing up a design or writing about the protocol are all great projects you can try. Good luck and until next time!
After two weeks straight of posting daily, I unfortunately missed out on yesterday. Not to worry though, today's topic will hopefully be interesting enough to make up for it. Over the past few days I've been speaking about formal logic, particularly sentence logic, as I'm being taught through Teller's Logic Primer. Today I'm going to introduce one new concept, the final one to understand before we look at how arguments are structured in sentence logic and how we can use it to create arguments. The symbol we will be discussing looks like this: "A⊃B", where A and B are two sentences with a true/false value.
This little symbol, which looks like a backwards "c", is the conditional connective. A connective connects two different sentences and this one does so in an interesting way. We could "translate" A⊃B into English by saying "If A then B" or "A. Therefore B." as explained in the book. This symbol allows us to express the idea of the conditional in sentence logic. Before, we could only say "and", "or" and "not". We've now added "if" to our repertoire. So how do we use this? Let's come up with two example sentences. A will represent the phrase "I like chocolate" and B will represent the phrase "I will buy chocolate". If we transcribed the above into English we'd get the sentence: "If I like chocolate, then I will buy chocolate". Now let's say I add two negations, one to "A" and the other to "B", like so: "~A⊃~B". With that change we're now saying: "If I do not like chocolate, then I will not buy chocolate".
Each connective gives back a true or false value based on the sentences it's connecting. The conditional connective gives us a true value in every scenario except when A is true and B is false. I hope you've enjoyed this look into a new connective and are excited because I'm going to be talking a bit about how you can write simple deductive arguments based on Teller's book soon!
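If a quick sketch in code helps, here's the conditional written as a truth function (my own notation, not Teller's) along with its full truth table:

// The conditional A ⊃ B is false only when A is true and B is false.
const cond = (a, b) => !a || b;

for (const a of [true, false]) {
  for (const b of [true, false]) {
    console.log(`A=${a} B=${b} (A then B)=${cond(a, b)}`);
  }
}
// A=true  B=true  (A then B)=true
// A=true  B=false (A then B)=false
// A=false B=true  (A then B)=true
// A=false B=false (A then B)=true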
So two days ago, I discussed a bit about formal logic. Today, as I continued with Teller's Logic Primer, I came across an interesting lesson that I think will help clarify some things from that post. As discussed previously, I'm currently learning about sentence logic, which allows you to take different phrases, transcribe them into a formal notation, and then find things that are true or false about those premises. Logic allows you to see things cleanly and precisely. Yet, as I learned today, that precision comes at a cost. As Teller explains, we lose a bit of expressiveness with formal logic notation. The "and" used in formal logic doesn't translate exactly to the "and" we use in everyday speech. The same goes for "or". For example, transcribing a sentence like "I prepared a sandwich and ate it" would lose some meaning. Sure, both parts of the sentence can have a true or false value, but I'm also saying that one happened before the other. I couldn't have eaten a sandwich before preparing it. That aspect is lost if I write it using formal notation. In English, "and" doesn't just have a "truth functional" purpose, that is, acting as a function that takes true or false values. It can be used in other ways that can't be translated as cleanly. I think this was an interesting lesson about the precision that logic requires. Anyhow, I hope you enjoyed that. Until next time!
Today's daily learning tidbit will focus on exploring a small text from the Iliad. This ancient Greek epic laid a common foundation for ancient Greek culture and civilization. It tells the story of the war at Troy. Homer, the supposed author of the Iliad, weaves a tale involving heroes, gods, war, and glory. The fact that this myth has been passed down for millennia is a testament to its beauty and virtues. I've been exploring ancient Greek culture, and especially this myth, through the course "The Ancient Greek Hero", which is freely available. You don't need to know ancient Greek or really anything about their culture to follow along.
Each week in the course covers a few different passages. Today I'm going to be sharing my thoughts on one of the first ones we're introduced to, and how I think it relates to the greater story. Before we take a look at the passage, I think some additional context is necessary. The text is taken from a speech Achilles gave. He is one of the main heroes in the Iliad; his mother was a goddess named Thetis, but his father was mortal, so he is a mortal hero. His glory is recounted throughout the Iliad, and I think the passage really showcases the idea of fate and choice within the myth. The words in square brackets represent important words in the original ancient Greek, transliterated into English. Let's take a look:
|410 My mother Thetis, goddess with silver steps, tells me that
|411 I carry the burden of two different fated ways [kēres] leading to the final moment [telos] of death.
|412 If I stay here and fight at the walls of the city of the Trojans, then my safe homecoming [nostos] will be destroyed for me, but I will have a glory [kleos] that is imperishable [aphthiton].
|414 Whereas if I go back home, returning to the dear land of my forefathers,
|415 then it is my glory [kleos], genuine [esthlon] as it is, that will be destroyed for me, but my life force [aiōn] will then
|416 last me a long time, and the final moment [telos] of death will not be swift in catching up with me.
Iliad 9.410–416

Achilles is saying that if he goes to war with the Trojans then he will never return home alive, but he will be remembered forever. Yet if he does go back home, he'll live a long life, but lose out on the glory. Spoiler alert: he chooses to fight and wins glory for his people. What I think is most interesting about this passage is the way it makes the story feel real and alive. If Achilles chooses to go home, then there is no epic tale of glory, of kleos; all of that is lost to the winds of history. Yet, in the story, it feels like Achilles knows the Iliad, the story of his heroic journey, will be remembered forever. He knows we'll be talking about his glory for millennia afterwards. In a way, the text is sort of breaking the fourth wall here, talking to us directly as readers, telling us that it is because of Achilles that we are reading it today. It makes it feel like Achilles really did make a choice and it really did matter. I love the way the text almost predicts the future.
Anyways that short thought is the thing I learned today and wanted to share. I hope you enjoy it and if it piques your interest, give the course a try! Until next time.
Recently, I've gotten interested in learning formal logic as another one of my hobbies. Formal logic is a way to take an argument, which has multiple premises and a conclusion, and convert it into a notation which allows it to be manipulated and transformed. I came across this logic guide which recommended Teller's Logic Primer as my first step into the world of logic. The author has graciously made the book free, so I've been following along. Today I read the first three chapters, so I wanted to share a bit about what I learned. The type of logic covered in Teller's book focuses on arguments built from sentences which have a true or false value. In particular, it starts with sentence logic. Let me show you a bit about how that works. For example, let's say we have two sentences:
A: I have a name.
B: I have a pet.
Using these two phrases, we can play around with them in lots of different ways. In the first chapter, 3 different ways of manipulating these declarative sentences are shown: conjunction, disjunction, and negation. Each of these has a special symbol to go along with it. You can take a sentence and negate it to get the opposite of it. Let's say we take B and negate it. "I have a pet" becomes "I do not have a pet". Teller shows us that we can translate that into the following symbols: ~B. "~" is the symbol for negation and B is a stand-in for our sentence. Negation turns a true value into a false one and vice versa. Notice how we took so many words and converted them into just 2 symbols. That showcases the terseness of formal logic. These capital letters used as stand-ins for our sentences are known as "atomic sentences" or "sentence letters" in this book. Next, let's look at conjunction. This allows us to take two sentences and bring them together, like we use "and" for in English. So if we brought A and B together the sentence would become: I have a name and I have a pet. Using Teller's notation we'd write this as A&B. The "&" is used to represent conjunction. For the conjunction to be true, both of the values A and B must be true. If either one of them is false, the whole sentence is false. Finally comes disjunction, which works like "or" does. If we applied disjunction to A and B then our sentence would be: I have a name or I have a pet. Using the notation we'd get: A v B. "v" is used as the symbol for disjunction.
These 3 ways to manipulate our sentences are known as truth functions. Just like in math or programming, these truth functions take an input, in this case a value that is either true or false, and return a true or false value. Now that we know the basics, we can start combining them and create things like ~(A&B)&B. Give it a try with your own sentences and see how you can play with them using just these 3 concepts.
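If you'd like to play with these in code, here's a tiny sketch of the three truth functions in JavaScript (the function names are mine, not Teller's notation):

// Negation, conjunction, and disjunction as plain truth functions.
const not = (a) => !a;        // ~A
const and = (a, b) => a && b; // A & B
const or  = (a, b) => a || b; // A v B

const A = true;  // "I have a name."
const B = false; // pretend "I have a pet." is false

console.log(not(B));                 // true:  I do not have a pet
console.log(and(A, B));              // false: both sides must be true
console.log(or(A, B));               // true:  at least one side is true
console.log(and(not(and(A, B)), B)); // ~(A&B)&B is false, since B is false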
This was a very basic introduction to what's covered in Teller's Logic Primer, in fact, I didn't even share most of what the first chapter contained! If this note interested you at all, I highly recommend you give it a look. Until next time!
It has taken me a while, but I finally feel like I have a solid grasp on Dijkstra's algorithm, so I'd like to share my explanation of how to implement it today.
Dijkstra was a computer scientist born in the Netherlands who made many contributions to the field of computer science. One of these was a nifty algorithm he invented/discovered: the aptly named Dijkstra's algorithm. In essence, this tool lets you find the minimum cost to reach each point in a graph, which in turn lets you find the shortest path between two points. Now, there are plenty of ways to search for the shortest path, but this algorithm in particular deals with weighted paths. Let's imagine that we're trying to get to the other side of a city. If we knew which roads would take more or less time, we could find the shortest/fastest path to the other side. Some paths may be shorter physically, but have lots of traffic and so will take longer. We can give a weighting to each path and then use Dijkstra's algorithm to find the shortest route.
So with that rough understanding, let's take a deep dive into how the algorithm works. First, we take a weighted graph and add each point/node to a queue. We will call the cost it takes to go from one node to another the path cost. We give each node two values: the cost to reach that node, which we initially set to infinity, and the parent node, which we initially set to null. We then set our starting node to have an initial cost of 0. An important thing to note about our queue is that it must always be in sorted order, with the lowest-costing nodes at the very start. Until the queue is empty, we take the first element in the queue, add it to a visited object, and grab its neighbours, that is, the other nodes it's connected to.
For each neighbour we need to compare two values. The first value is the path cost of the edge to this neighbour added to the current node's cost. We compare that to the neighbour's current cost in the queue. If the first cost is less than the second cost, then we've found a cheaper path to the neighbour. In that case, we set the neighbour's parent to be the current node and its cost to be the first cost we calculated above. We keep doing this until our queue is empty.
Once every node's been dealt with, we'll have the minimum cost to get to each point. We can use this to find the shortest path by selecting our end point and following its parent. Then we take that parent and follow along to its parent, and so on and so forth, until we get back to our starting point. With that, we'll have the shortest path listed in reverse, from the end point back to our starting point.
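To make those steps concrete, here's a minimal sketch in JavaScript. The node names and the simple sorted-array "queue" are just for illustration; a real implementation would normally use a priority queue.

// A rough sketch of Dijkstra's algorithm. The graph is an object that maps
// each node to its neighbours and the path cost of each edge.
function dijkstra(graph, start) {
  const costs = {};   // cheapest known cost to reach each node
  const parents = {}; // the node we came from on that cheapest path
  const visited = new Set();

  // Every node starts at Infinity with no parent, except the start node.
  for (const node of Object.keys(graph)) {
    costs[node] = Infinity;
    parents[node] = null;
  }
  costs[start] = 0;

  const queue = Object.keys(graph);
  while (queue.length > 0) {
    // Keep the queue sorted so the cheapest node is always at the front.
    queue.sort((a, b) => costs[a] - costs[b]);
    const current = queue.shift();
    visited.add(current);

    for (const [neighbour, pathCost] of Object.entries(graph[current])) {
      if (visited.has(neighbour)) continue; // already settled, skip it
      const newCost = costs[current] + pathCost;
      if (newCost < costs[neighbour]) {
        costs[neighbour] = newCost;   // found a cheaper way to the neighbour
        parents[neighbour] = current; // remember how we got there
      }
    }
  }
  return { costs, parents };
}

// A made-up weighted graph: keys are nodes, values map neighbour -> path cost.
const graph = {
  home:   { cafe: 2, park: 5 },
  cafe:   { home: 2, park: 1, office: 7 },
  park:   { home: 5, cafe: 1, office: 2 },
  office: { cafe: 7, park: 2 },
};

const { costs, parents } = dijkstra(graph, "home");
// Walk the parents back from the end point to recover the shortest path.
const path = [];
for (let node = "office"; node !== null; node = parents[node]) path.push(node);
console.log(costs.office, path.reverse()); // 5 [ 'home', 'cafe', 'park', 'office' ]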
With all that information in mind, we can essentially implement Dijkstra's algorithm! It was a challenge to fully digest this algorithm and understand what's going on, but it was definitely worth it. Until next time!
I love reading different blogs and websites. I think it’s amazing how the Internet has allowed us to connect in all these new and interesting ways. Everything changes on the web so quickly, but I really want to keep up with what my favourite sites are creating. That’s where RSS comes in. RSS is a special file format that allows you to stay in the loop with your favourite websites, if they support it. All a site needs to do is have a page where their RSS feed can be found. This feed has a list of some of the site’s latest posts. Tons of different applications support RSS, but I haven’t found a product that perfectly matches my tastes when it comes to how I want to receive content. That’s why I’ve decided to start the adventure of building my own RSS reader web app. What this’ll allow me to do is to collect multiple RSS feeds from different sites and be able to see the latest articles all in one place. I’ve never really dived deep into RSS before, so this new project gives me the perfect chance to do just that!
My first question before I started this project was: how do I keep track of when a feed changes? From what I've found, there are a couple of main techniques you need. I don't want to end up with a bunch of duplicates of an article, and I want to make sure I'm not fetching a feed too often. The answer to both lies in conditional GET requests and keeping track of IDs. Each post in an RSS feed can have an ID, which you can use to make sure you don't accidentally add a duplicate. You can also potentially use the URL as an ID. A conditional GET request is the second piece of the puzzle. This is functionality that's built right into HTTP, the protocol which helps you get data on the web. Essentially, you can request a page (like an RSS feed) and if something's changed, you'll get the updated feed, but if everything's the same, the server sends back a short "not modified" response instead of the whole thing. How does the server know whether to give you an updated version or not? In your request, you provide something called an ETag, which you received the first time you requested the page, in an "If-None-Match" header. The server will check to see if the tag you provided is still the latest one. Based on that, it'll either send you an updated version of the page or respond with a "304 Not Modified" status and no body. This saves time and bandwidth for you and the server.
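Here's a rough sketch of what that could look like with fetch in JavaScript. The feed URL is a placeholder, and it assumes the server actually supports ETags:

// Fetch a feed, but only download the full body if it changed since last time.
async function fetchFeed(url, lastETag) {
  const headers = lastETag ? { "If-None-Match": lastETag } : {};
  const response = await fetch(url, { headers });

  if (response.status === 304) {
    // Nothing has changed since we last asked, so skip parsing entirely.
    return { changed: false, etag: lastETag };
  }
  const body = await response.text();
  return { changed: true, etag: response.headers.get("ETag"), body };
}

// First call: no ETag yet. Later calls: pass the ETag we saved last time.
fetchFeed("https://example.com/feed.xml").then(console.log);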
With that, I now have a better idea of how and when to update my feed reader with new articles. I'm one step closer to starting my project. Until next time!
Today I want to talk about a fascinating concept I just learned about: regression to the mean. This occurs when you get an extraordinary result the first time around and a second (or future) result that's a lot closer to the average, or vice versa. Let's say I'm playing a match of chess against a friend. We both have no experience in chess. If my friend were to beat me, knowing what we know about us, we could probably agree that my friend won because of luck. Still, who do you think would win in a second match-up, me or my friend? Before I learned about this concept, I'd probably say my friend; maybe they just have some innate chess talent I never knew about. If we take regression to the mean into account, we'd know that, on average, it's essentially pure luck who wins and loses; it should be about a 50-50 chance either way. My friend probably has no advantage, even if it may seem like they do. I was just unlucky.
It's easy to believe that my friend would win again because they seemed like the better chess player. I'd have believed it too, but an extraordinary result the first time probably isn't going to be repeated. In fact, it'll probably get worse. To be fair, it is possible that my friend is a chess genius, but it's much more likely that I was just unlucky that first time. I think this phenomenon gives us something to think about and shows a little bit of the beauty of statistics. Until next time!
Well, it's been a week of doing this challenge! For today, I wanted to share a little reflection and keep my learning tidbit a bit shorter than usual. I think that writing down what I've been learning has really helped make the concepts clear in my mind. For day 1, I shared what I'd learned about applied probability. I didn't realize how little I had understood until I had to put it down, and I kept referring back to the lecture notes and other online resources until I had it clear. What's great to see, too, is the variety. I've written about math, accessibility, algorithms, how to name things, testing and more. I've recently started a course on Ancient Greek heroes, so once I really get into it, I hope to share what I learn from there too! I'm happy to have started this challenge and I hope I can keep it up for the next 3 weeks. It's definitely been harder than I thought it would be to post daily, but I feel like it's been worth it.
For today's learning tidbit, I wanted to talk about "breadth-first search". There are lots of different searching algorithms, and I'd never even considered that until I started my deep dive into algorithms. We deal with these underlying pieces of technology all the time without even realizing it, and it's been great to gain an appreciation of them. As I try to wrap my head around all these different algorithms, one I thought particularly elegant was the one you use for "breadth-first" search. Now, this isn't the be-all and end-all of searching algorithms. It really depends on what your needs are, but it's definitely interesting. BFS (breadth-first search) helps us look for a certain point in a graph. A graph is essentially a bunch of connected points; sometimes it's a two-way connection and sometimes it's only one way. For example, let's say we have points 1, 2 and 3. 1 is connected to 2 (and vice versa), and 2 is connected to 3. Our graph would then look like this:

1 ↔ 2 → 3
Now that we have a grasp on graphs, we're ready to move on to the next step. Before we can understand BFS though, we need one more concept: queues. It's not the waiting-in-line kind, although it is similar. A queue can do two things: it can give you the first element or add an element to the end. That's it. To start things off, we pick a starting point and add it to our queue. Our algorithm kicks off by getting the first element of our queue, which is the starting point. We then add all the neighbours of that point to our queue. Now we get the first element of the queue and repeat the above steps again. And again. And again, until our queue is finally empty. At that point, we've either found the thing we're looking for or it doesn't exist. To make sure our queue empties and doesn't add the same neighbours on and on forever, before we add a neighbour to the queue, we add it to a visited list. Then we check our visited list each time before we add a neighbour to the queue. That way we're only looking at each point once. That's it! With that, you can implement BFS. Until next time!
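For anyone who'd like to see it written out, here's a minimal sketch in JavaScript. The graph is the tiny one from above, stored as an object mapping each node to the list of nodes it points to:

// Breadth-first search: returns true if we can reach target from start.
function bfs(graph, start, target) {
  const queue = [start];
  const visited = new Set([start]);

  while (queue.length > 0) {
    const current = queue.shift();       // take the first element of the queue
    if (current === target) return true; // found what we were looking for

    for (const neighbour of graph[current]) {
      if (!visited.has(neighbour)) {
        visited.add(neighbour); // mark it before queueing so we never revisit it
        queue.push(neighbour);  // add it to the end of the queue
      }
    }
  }
  return false; // the queue emptied without finding the target
}

// 1 ↔ 2 → 3, written as an adjacency list.
const graph = { "1": ["2"], "2": ["1", "3"], "3": [] };
console.log(bfs(graph, "1", "3")); // true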
Yesterday, I spoke a bit about web accessibility and how I’ve started learning about it. I mentioned the fact that by default when something is updated interactively (using JavaScript) on a web page a screen reader may not realize it. As I’m creating interactive examples on my site, I want to ensure that everyone can access them and realize when changes are occurring. ARIA, which I quickly brought up at the end yesterday, provides the context necessary to our HTML elements in order to tell a screen reader to announce those interactive changes. I’ve integrated an interactive demonstration into my [HTTP Introduction tutorial](/posts/an-introduction-to-http-the-foundation-of-the-web). When readers change values, they can click submit and see the changes appear on screen. In order to ensure a screen reader catches the change, I’ve used ARIA live regions. You add a tiny amount of markup to an HTML element and suddenly, when it changes, those updates will be announced. With ARIA live regions you can specify the urgency as well as which controls relate to the region. Maybe an error has occurred and you want the user to know immediately, or maybe the user received a new chat message and you don't want to disturb what they're currently doing right away. ARIA live regions give you that granularity. It’s really incredible how such a small change can make a big difference in the user experience. I’m glad that I can hopefully help someone else better access my site and hopefully this will inspire others to think about this too!
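As a small illustration of the kind of markup and script involved, here's a sketch (the element IDs and the message are made up, not the actual code from my tutorial). The only accessibility-specific part is the aria-live="polite" attribute, which tells screen readers to announce changes to that element without interrupting the user:

// Assumes the page contains something like:
//   <button id="submit">Submit</button>
//   <p id="result" aria-live="polite"></p>
const result = document.getElementById("result");
document.getElementById("submit").addEventListener("click", () => {
  // Because #result is a live region, screen readers will announce this
  // text change even though focus never moves to the element.
  result.textContent = "Your request was sent successfully.";
});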
I want to make sure that my website can be accessed by as many people as possible from wherever possible. That's the beauty of the web. As part of that mission, I want to ensure that my website is accessible and that, no matter what a visitor's particular needs are, they can use my site to its fullest potential. What's great about web pages is that they're pretty accessible by default. As long as you're using HTML elements that make sense for what you want them to be, like using an h1 element for your main header and a button for button functionality, then a screen reader, which reads a website out loud, should be able to handle it. Writing HTML in this way is known as semantic HTML. Those little tags you put around text can make a big difference in how it's handled by things like search engines or screen readers. Embedding that extra meaning, those semantics, is super important in making your intent clear.
Part of what I want to do on my site, too, is to create interactive examples. I want readers and learners to truly be able to dive deep into a subject area and play around with it. That's why I've been reading up on how I can make my interactive examples as accessible as possible. When you click a button on an interactive example, a change happens immediately. This is clear to those who aren't using screen readers, as something will change on screen, but how should a screen reader react when you make those changes? That's where ARIA comes in, which I've been reading about at this great blog on accessibility.
In short, ARIA, which stands for Accessible Rich Internet Applications, gives extra information to a screen reader so you can clarify what your intent is. The main document describing it can feel pretty overwhelming, but I know it’s something I want to gain a better understanding of. I’m really excited to learn as much as I can about this wonderful feature of the web which helps open it up to all.
So lately, I've been hard at work making my webmention receiver work so that others can send me comments, likes, and all other sorts of replies to my posts and notes! My implementation has been tested with webmention.rocks, an AMAZING site that helped me build my version; it automatically tests sending a webmention when I run my program on my computer. This is great and all, but as I continue to add more features and complexity to my implementation, I worry that I'll mess something up in my program with all these moving parts. When it comes to sending a webmention, lots can go wrong! The site may take too long to load, the user may have entered the URL wrong, and so on, and all of this needs to be carefully tested. To help with that, I've been trying to read up as much as I can about software testing. That way, I can catch any errors that arise in an automated way instead of checking each individual condition.
Since I've written my webmention receiver in JavaScript, I've been using Mocha and Chai to do my testing, which seem to be very popular choices. Mocha is the test framework which runs the tests I've created and Chai is an assertion library. Essentially, Chai is what allows me to check that nothing went wrong. If something did, Chai will flag an error, which pops up once Mocha finishes running.
When I first started to think about how I would be testing this mainly server-side application, I thought I should be testing every single method, including the internal ones. Instead, thanks to this article, I'm going to focus on only the public methods. Internals change and it doesn't really matter what's going on behind the scenes. In my tests, I'm checking that things happened like I expected; how they happened isn't important. The internals are tested implicitly by testing the public methods, as what's public depends on what's hidden.
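To give a flavour of what these tests look like, here's a hypothetical Mocha/Chai example. The parseTargetUrl function, its module path and its behaviour are all made up for illustration; the describe/it/expect structure is the real Mocha and Chai API:

// test/webmention.test.js
const { expect } = require("chai");
const { parseTargetUrl } = require("../src/webmention"); // hypothetical public method

describe("parseTargetUrl", function () {
  it("rejects a malformed target URL", function () {
    expect(() => parseTargetUrl("not a url")).to.throw();
  });

  it("accepts a valid post URL", function () {
    expect(parseTargetUrl("https://example.com/posts/1")).to.be.an("object");
  });
});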
With that, I’ve shared a bit about software testing and how I’m going to go about testing my latest project. Software testing is incredibly important, so don’t leave it to the wayside. Until next time!
Today’s day one as I share what I’m learning about! Probability is something I haven’t really thought about since grade school. For this note, we’re going to be learning a bit more about applied probability in plain English. Lately, I’ve learned that using randomness can be incredibly useful in designing efficient algorithms. When I think about algorithms, I think of a series of steps needed to achieve a goal. There’s no inherent randomness in that. Turns out though, that randomness can help make things more efficient. That’s where probability comes in, and it can help us find how likely a certain random scenario is.
What I’ve learned is that there are a couple of fundamental concepts that are necessary to understand in order to be able to look at an algorithm that has randomness in it and be able to find its efficiency.
First there's the sample space, which includes all the outcomes that could happen. The classic example is a dice roll. If we roll a single die, the outcome could be any number from 1 to 6. Next are events; an event is a set of outcomes from the sample space. An event that could happen is that if we roll the die, we MAY get a number less than 3. What's the probability of that event? Well, since there are 6 possible outcomes and 2 of them (rolling a 1 or a 2) are included in our event, the probability of it is 2/6, or 1/3! After that, I learned about random variables. This is a value that appears randomly and relates to the problem at hand. Using our dice example, whatever value pops up when we roll is a random variable.
Our second-to-last concept to understand is expectation. All that means is: what do we expect our random variable to be, what is its expected value? The expectation is a weighted average. What's the average value of our die? To find that out, we multiply each value by the probability that it'll occur and add all of these together. So to figure out the expected value of our die, we need the values and the probability of each value. With a fair die, the probability of each value is ⅙. That means we'd do the following calculation: 1(⅙) + 2(⅙) + 3(⅙) + 4(⅙) + 5(⅙) + 6(⅙) = 3.5. That means that the expected value of our die rolls is going to average out at 3.5!
These 4 initial concepts lead us to our last and most important one: the linearity of expectation. It sounds like a mouthful, and this is definitely the one that took me the longest to fully grasp (and to be honest, I'm still trying to understand its implications). This is the one we've been building up to. Let's imagine we want to find the expected value if I roll two dice instead of one. We know that the expected value of one die roll is 3.5, so due to the linearity of expectation, I know that the expected value of the roll of two dice is 7! Why? I can add the expected value of one die roll to the expected value of another die roll (i.e. 3.5 + 3.5) and I get 7! If I didn't know that, and I wanted to find the expected value of two dice rolls directly, it would look something like this: (1+1)(1/36) + (1+2)(1/36) + … and so on for each of the 36 possible pairs. Honestly, it's so much work I don't even want to show it all! The linearity of expectation makes things so much easier. In essence, the linearity of expectation states that the expected value of a random variable, let's call it Z, that is the sum of other random variables will be the same as the sum of their individual expected values.
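A quick simulation makes this easy to sanity-check. Here's a small sketch that approximates the expectation of two dice by brute force (Math.random() stands in for a fair six-sided die):

// Simulate a fair six-sided die.
const rollDie = () => Math.floor(Math.random() * 6) + 1;

let total = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) {
  total += rollDie() + rollDie(); // the sum of two independent rolls
}
console.log(total / trials); // ≈ 7, matching 3.5 + 3.5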
There we go! We did it! Those are the main concepts I recently learned about probability. These facts help us get one step closer to being able to analyze algorithms that involve probability and to better understand the world around us. Now, I’m still learning too, so if you see anything that needs to be corrected, my contact info is at the bottom of every page. I hope you’ve enjoyed this taste of probability and are excited for what’s coming up!
I'm not a big fan of New Year's resolutions, but I do try my best every year to find ways to improve myself. This year I have two main goals: to learn more and to write more. I feel like there's just so much out there to learn and I want to push myself to absorb more of it. There's been lots written about learning in public, so I think it's my turn to give it a try. That's why I'm going to be committing myself to a challenge: 30 days of writing about what I'm learning. 30 days, 30 concepts, about 100 words each.
The reason I picked an "X days" challenge is because I've been amazed by what others have accomplished with these constraints. From things like #100DaysOfCode or how @adacito wrote 100 words for 100 days, it's clear forcing yourself to do something helps you get better. Especially with the added social pressure. Hopefully, one day, I'll get to writing every day, but as I work my way up to those goals, I want to start a bit smaller. That's why I chose 30 days. I'm determined not to give up halfway through; I truly want to see my goal to its end.
It's more than just forcing myself to write though. I could probably do that on my own, but I think posting on my site is going to help me with another challenge of mine. I really struggle with knowing when to hit that publish button. Every sentence feels like it can be edited a bit more, every phrase a bit more perfectly crafted. I want to hit every note, make you feel what I want you to feel, but knowing when to let go and let your writing off into the wild is hard. With this challenge, I'm forced to publish whatever I have. Of course, it's going to go through a short but intense editing process, but the clock will be ticking. That added element of tension is exactly the pressure I need to get things out the door.
So what exactly will I be writing about? Well the beauty of the world we live in is how much content we have. In fact, sometimes it can feel overwhelming how much stuff is out there. I’m going to be covering tidbits from all sorts of topics from computer science and algorithms to math and some history too!
I hope that once I'm done with this challenge it'll give me even more to write about and really help get my creative juices flowing. I plan on writing a follow-up post once I finish it. Wish me luck and see you on the other side!