4/19/2019

Reading 09: Birth of a Nerd

I can't say much of my upbringing resembles what Linus experienced. My mom always told me she knew from the time I was very little that engineering was the realm where I would be most engaged. I'm not sure if it had anything to do with the K'nex rollercoaster I built to shoot marbles across the hall at my sister's bedroom door, or the fairly obvious nerd qualities I possessed (the social awkwardness and the math skills). The key difference was that while my mom knew I would have rather stayed in my room with the large box of Legos, she wanted us to be engaged with other people, which meant sports and activities. It meant an hour a day on the computer. Maximum. By the time I was 14, that maximum had risen to four hours (largely because I convinced her that I needed more time on my computer to get schoolwork done). And it meant that, unlike Linus, I had a small group of friends who shared no interest in computers. I think that because of this, I'm looking for more than a connection with a computer and the internet. Linus seems to fit the description of the idealistic hacker: the kid who never left the comfort of the basement where his computer sat in a dark corner, and the kid who was comfortable enough with a computer to do amazing things.
It seems to me that just about everyone in college assumes that, at the end of the day, they won't amount to the fancy success story that Steve Jobs or Bill Gates became. They haven't already done the impossible, so the end goal will never be reachable. I'm sure Bill Gates didn't think he would amount to much when he first started and compared his success with those who had come before him. Linus certainly didn't have a moment where he decided he would be great. It doesn't all come together at once; it's just a path like any other. I find it silly that a lot of people today seem to think that they haven't done enough by the time they're 21 to be successful by the time they're 40. As if those 19 years where you slowly hone your skills and develop your unique talent don't contribute to where you want to be. On the other hand, comparing Linus to Steve Jobs or Bill Gates has one fairly significant flaw. While I'll admit both Steve Jobs and Bill Gates are brilliant, they are businessmen at the end of the day. It is the business that made them famous. Linus is a programmer. He's famous not because of how popular Linux is or how many people use it, but because of how good a program it is. In a way, Linus' accomplishments dwarf those of Bill Gates and Steve Jobs. Apple is best known for its sleek, ingenious design. Microsoft is known for its dominance in the market. Linux is known for being the dominant backbone of the cloud. It's not marketed or advertised. For the most part, the average computer user can only vaguely point to Linux as being an operating system. Sure, it's cool that Linus was buried in his computer from the moment he was first introduced to one. And in hindsight, he seems like the typical nerd who blossomed into a sophisticated programmer. But his childhood didn't define his future, and the fact that he's good at what he does has little to do with the fact that he was obsessed as a kid.
The important part is that he was obsessed. Passion for computers drove his path, and even though I wasn't burning the midnight oil working out just how a computer reads instructions, I haven't eliminated the possibility that I can have that same passion. In hindsight, everyone's childhood seems as though it was an integral piece of getting them where they needed to go. But you don't need to have built a nuclear reactor in your basement by the time you were 14 to do amazing things.

4/5/2019

Reading 08: The Magic Cauldron

This week we talked a lot about the possibility that an open source project has to generate positive cash flows. Of course we can understand how large open source organizations (such as the Linux Foundation) can generate money without selling their product. It comes in the form of donations and selling various products to promote the name. Upon first hearing about this, it occurred to me that this model is very similar to how many famous YouTube stars make money. Top channels sell name-brand t-shirts and coffee mugs, sell various songs written for the channel, and receive donations from subscribers. However, while it is important to note that the top YouTube stars can make a lot of money by broadcasting their lives on the internet, just because one person can make millions on the internet doesn't mean the average person makes that much money. I envision a lot of the same things happening in the open source community. Large foundations can effectively generate money despite giving their product away for free, but small projects don't have the same capabilities. Donations come from having a product people value enough to donate money to. My dad is willing to donate to Wikipedia because it has a track record of being a service he can rely on. A small project doesn't have the history or the exposure to elicit large donations. It isn't impossible for these foundations to make money, but just because it's possible doesn't mean it's likely.
Eric Raymond justifies the small profit margins by pointing out that many programmers do not produce packaged software (software which stands alone), the one major exception to this rule being video games. At my previous internships, my role was to develop software which worked with various hardware components (the engine and brakes of a locomotive). Sure, the software I developed was not going to be sold independently to locomotive manufacturers. But developing a hardware system which uses proprietary software to maximize the efficiency and ensure the safety of the locomotive is just as much software as an iPhone app. The software I developed was going to be bundled into a hardware product and sold, but that doesn't invalidate the contribution of the software component to the final product. Maybe I don't buy software independently, but when I bought my computer, some part of that price went to Microsoft for its operating system and any other applications included in the total price. The idea that it is acceptable for open source to have such small margins because software doesn't independently generate income doesn't make a lot of sense. Apple doesn't manufacture its laptops or iPhones; it purchases them from a manufacturer and adds its software. That software makes the product desirable for customers. Even if iOS is sold with the phone and updates are free, I didn't purchase an iPhone because I really value the camera. I spent that absurd amount of money to buy the phone that comes with the software I want. All that being said, there is value to the open source business model. I don't expect to pay for updates for apps or operating systems (even though this used to be a practice in the past). A company that specializes in pure software development cannot compete with other companies if it is forced to charge for the use of its software. It doesn't matter how much better the software is; the free software almost always wins out.
The open source community enables small ideas to be competitive in a dominant market. Innovation is more possible because of this model. It isn't the most profitable for the small open source companies giving software away for free, but it forces companies to focus on feature development rather than reduced cost as the determining factor. Promoting that innovative market is crucial for the development of technology, which is why many large technology companies are eager to donate to these projects.

3/30/2019

Reading 07: The Noosphere

I decided pretty early on that my goals consisted of learning engineering now and, in the future, venturing into the business aspects of a company and progressing up the ladder. As a result, my minor is in Engineering Corporate Practice. Admittedly, I've only taken a few business classes and cannot be expected to fully understand all the concepts involved, but I struggle to understand how open source fits into the traditional business model (at least not in a direct way).
In a sense, open source developers are contractors working on specific pieces of code that can be used by many other large businesses. But these contractors see no monetary benefit for their work. Large companies (such as Google) are free to use this code by simply giving credit where it is due. Open source developers may be pretty close to "true hackers" in this respect; they aren't developing for the money, or necessarily for the fame that may come with developing influential software. Instead, they want to program and demonstrate their skill amongst their peers. The only people who see the credit on the code are those who worked on it or those who integrate it into their software. These programmers only want their peers to know that they helped to develop this. In ESR's list of taboos in the open source community, he includes: "Removing a person's name from a project history, credits, or maintainer list is absolutely not done without the person's explicit consent." This speaks to the idea that open source programmers are "paid" for their contributions in recognition. In this sense, the community could survive (although not really as a business). Economics will tell you that the basic form of a business is a group of individuals who produce goods and services in exchange for (typically) money or other goods. It is possible to match open source to this definition: code is produced and developed in exchange for recognition. That being said, recognition is not a sustainable good to live off of. Programmers develop with the understanding that they won't be paid for their effort. Understandably, this community is restricted to those who have extra time (beyond the jobs which make them money). Those who can't afford not to be paid, or even those who don't have a computer or internet access, cannot participate. The idea behind paying individuals for their contributions stems from the idea that individuals would rather be doing the thing that provides them with the most benefit.
This is not necessarily monetary. Individuals who choose to develop open source do so because it provides them a benefit to have contributed to that project. The issue arises when a programmer who derives benefit from having contributed also has the option of working to derive a monetary benefit. The open source community can provide no incentive to get them to stay. I believe the open source community is sustainable, although not for individuals. The community itself can continue to thrive (as long as code is properly documented) and be passed down to new programmers. However, it cannot be sustainable for an individual. Some individuals don't need to worry about money, and some can dedicate part of their time to a paying job and part to open source. But a large number of people cannot thrive without an income. The open source community works because it can deal with the constant turnover (not necessarily at the highest levels) of programmers. It can continue to thrive because of this. Additionally, businesses (such as Google) want open source to remain because it provides them a benefit, and thus they will help fund different projects (such as the Linux Foundation). The cathedral and the bazaar (in the context of software development) are two different models for how software should be produced. The first is the cathedral. This is the idea that software is produced for a specific purpose. An individual designs and thinks of requirements that the software should have, and then the group of programmers builds the software to match this original design. The bazaar is different. This is the idea that software is built in small pieces by different individuals, each creating features that individual users may like. Together, these small pieces combine to form the complete software, which supports many different features.
The two concepts are nearly polar opposites; the cathedral is taking the design and programming to match that design, and the bazaar is programming small pieces and building up to a complete design. Of course, there are situations where each one can be very helpful. For instance, when designing software to have a number of different tools supporting different situations, it may be helpful to have a more bazaar-like development model. Different individuals can be more effective implementing small pieces than one individual trying to micromanage how the software should best implement each particular piece. Large companies work better with a particular design matched to the needs of the desired customers (such as how Word is designed as a simplified, high-end text editor). This way, they can determine what an end user wants and then design software to match that purpose.
As we learned early in our computer science education, part of the Unix philosophy is to do one thing, and do it well. In the open-source (or bazaar) community, this is where software thrives. Each piece works as smoothly and efficiently as possible. By having a smooth way of linking pieces together, software can contain a number of small pieces of really powerful code. Together, all the pieces fit together to form cohesive software made of components that each do their task well and that collectively address the range of issues users care about. Where can this go wrong? Some software tries to do too much. Consider Linux. While the computer programmer will rave about how great Linux is, it is significant that just about nobody outside of programming has used Linux. There are so many features that make it really useful for a computer programmer, but not for the average person. I love Linux because it allows me to use the terminal so seamlessly with the OS to interact with the various directories. It contains all the useful GNU software to write and run programs. But most people don't use their computers for this. As a business venture, Linux is not an OS that the average person wants. They don't use the command line, so it doesn't make sense for that to be an integral part of the functionality. While all of these features are great, most people just want their OS to be simple and easy to use (which is why Mac OS X is so popular). Open-source software molds itself to work well for those who use it (generally programmers), who are looking for all the specific details. Despite how effective the bazaar programming model may be, it is important to note that centralized design works well to isolate which functions are important for users and which users may not want. Removing unnecessary functions streamlines the functions that remain and makes the software easier for end users to use.
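The "do one thing, and do it well" idea can be sketched outside the shell, too. Here is a toy Python illustration of my own (not from the reading): each small function mirrors a classic Unix tool, and chaining them together works like a pipeline.

```python
# Each function does exactly one job; composing them mirrors a Unix pipeline:
# grep -> awk -> sort -u, written as small Python pieces.

def errors(lines):
    # Keep only the lines that mention an error (like `grep ERROR`).
    return (line for line in lines if "ERROR" in line)

def last_field(lines):
    # Pull out the final whitespace-separated field (like `awk '{print $NF}'`).
    return (line.split()[-1] for line in lines)

def unique(items):
    # Deduplicate and sort (like `sort -u`).
    return sorted(set(items))

log = [
    "INFO  boot ok",
    "ERROR disk full",
    "ERROR disk full",
    "ERROR net down",
]
print(unique(last_field(errors(log))))  # -> ['down', 'full']
```

None of these pieces knows about the others; the composition at the end is what produces the useful result, which is exactly the "smooth way of linking pieces together" that makes the philosophy work.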
That being said, open-source software is really useful for matching the sometimes rather particular needs of software developers. It has many features which allow very specific uses, which often suits the needs of programmers. It just doesn't necessarily support the needs of the average person.

3/1/2019

Reading 05: Startups

It's nice to believe that the same path to wealth exists for everyone. Paul Graham made his money on a startup, so he imagines that the same path to riches exists for everyone: anyone with an idea and a skill can make millions by selling their idea. He describes startups as a "reliable way to get rich". His translation of the average salary of a skilled computer scientist to one working at a startup isn't expressed in realistic numbers. Even if a programmer puts around $3 million of programming effort into his job and his hours, he isn't being paid that. Effort doesn't translate to income. For the most part, our salary is not based on the amount of effort we put into our work. I won't be getting paid based on how much work I get done; I would get paid the same amount whether I did the bare minimum or the most work possible. The idea that effort translates to wealth isn't accurate. This may have been true when programmers received a royalty for software that was sold, but it is not true anymore. Startups are an example where this is particularly true. Even though there are fewer individuals at a startup to distribute the share of money to, startups take a long time to become able to make money. When they do, it becomes necessary to invest that money back into the company to continue generating money. Startups are known for being risky decisions. While they can produce wealth, they can also be huge money pits where individuals spend all their savings and end up producing a product that never makes a splash in the market. Paul Graham is biased because he was lucky enough to make money in this market instead of losing it.
But just because he was able to make money doesn't mean he is correct to say it's the best way to reliably make money. It's not reliable unless you can predict what the market wants, which is essentially predicting the future.
Paul Graham's experience also provides him with a distorted view of how wealth is distributed. He believes that the distribution of wealth within a big corporation makes sense because the market determines the value of the CEO to the company. But this doesn't account for the market overvaluing the impact of a CEO on the company's success. Economists don't claim that the market is a perfect predictor of value. In fact, there are many instances of market failure where economists suggest the government interfere to ensure that the proper value of goods and services is determined. Graham assumes that the best market is the one where nobody interferes. It isn't correct to just assume the market correctly valued the CEO of a company. The idea that a CEO gets paid absurdly more than the average person in their company shouldn't rest on the assumption that the market correctly determined the value of a good CEO to that company. Additionally, if people are paid by the value they provide to the company, it doesn't explain why engineers get paid so little when they're responsible for creating the products the company sells. Graham's essay "Mind the Gap" seems to find a justification for why it's okay that some people are paid hundreds of times more than others. He describes how technology enables the rich and the rest to live very similar lives. He says, "A hundred years ago, the rich led a different kind of life from ordinary people. They lived in houses full of servants, wore elaborately uncomfortable clothes, and travelled about in carriages drawn by teams of horses which themselves required their own houses and servants." Today, the very rich still live in mansions filled with personal staff, wear expensive designer clothes to show off their wealth, and travel around in private jets. I'm not really sure the rich of the past are that far removed from those of today. The idea that the rich exist on the same level as everyone else is a delusion.
The rich have access to more opportunities. They don't die of curable diseases because they lack insurance. They don't need to work long hours just to have enough money to afford a roof. The idea that technology has somehow transformed the societal structure is false. We still have the rich and the poor. Technology improves life for those who can afford to spend money on it. Graham uses cars as an example. Many years ago, when cars were brand new, only the rich had them. Now that the technology has improved, cars have become cheaper and average people can afford them. The same thing happened with TVs and computers. This doesn't mean there aren't new technologies now that only the rich can afford. Not everyone is flying around in a helicopter.

2/22/2019

Reading 04: Programming Languages

When I first started programming, it seemed to me that learning just one language was difficult enough. I had never considered that I would ever be a competent programmer in more than a few languages. I actually have a friend at another school who was only taught to program in Java. In fact, based on my experience at my internship, the majority of people really only know one or two programming languages well enough to be considered competent. And yet, the point of having multiple programming languages is the specialization of tasks. For instance, Python is really useful for readability and functional programming, whereas C (a much less readable language) benefits from being a lower-level language, and thus has more direct control over the computational resources of the system. Choosing a programming language based on its capabilities, rather than on the general competence of the programmer in that language, is an important step in starting a project. Why would you want to spend hours creating code in C that does the same thing as Python code that could be written in a third of the time?
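To make that Python-versus-C tradeoff concrete, here is a small sketch of my own (not from the reading): finding the most common word in a string is essentially one line of Python, while an equivalent C version would need a hand-written hash table, tokenizer, and memory management.

```python
from collections import Counter

def most_common_word(text):
    # Counter builds the word-frequency table that a C program would need
    # dozens of lines (hash table, string tokenizing, cleanup) to replicate.
    return Counter(text.lower().split()).most_common(1)[0][0]

print(most_common_word("the quick fox jumps over the lazy dog"))  # -> 'the'
```

If raw performance or tight control over memory mattered, the C version would earn its extra effort; for everything else, the readable version written in a third of the time wins.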
In the reading, Eric Raymond's essay about programming languages suggests that all programmers should expose themselves to a number of languages to broaden their skills. He suggests learning Lisp, even if there is never any intention of using it, because the "experience will make you a better programmer for the rest of your days". I agree with this to a certain extent. I do think my experience with different programming languages has given me a better idea of the capabilities of a computer. Learning C was useful to me because for the most part I've worked with C throughout my internships (because the language is more secure than a scripting language like Python). But learning Python was also useful because the simplicity of programming in Python enabled me to think more abstractly about how the program worked rather than focusing on functionality. At the end of the day, the choice of programming language doesn't significantly affect the outcome of the code (although lower-level languages generally have less overhead). If the language doesn't change the outcome, it makes sense to program in a language that is easier to read and use. It is important to choose a language with the future of the program in mind. Some new languages don't have a lot of support for new functionality, which would limit the growth of the program. Additionally, if the language doesn't last (it loses popularity), then the code becomes nearly impossible to change or edit in the future. If I were told to work on some old code written in a language I had never heard of, it would take me significantly longer, just because I would have to spend time picking up basic syntax rather than already knowing the basics of a standard language that can be applied just about anywhere. Sure, that obscure language is really useful for accessing a particular service, but it loses its convenience when the language becomes forgotten history.
The choice of a language shouldn't be an implicit decision made because a language is more popular. That's the same argument that supports the use of Java all over the place, despite it being a language which involves more overhead and complexity than just about any alternative. Java is the most popular, so why isn't it the best choice? Choosing some random, obscure specialty language will run you into the same problems: not enough people know anything about the language, so getting started involves all sorts of complexities, and still only a few people will ever know the language well enough to program competently in it. Why intentionally add so much more learning to the process when the work can be done faster in a slightly more common language that is capable of the same things as that obscure language nobody knows?

2/15/2019

Reading 03: Nerds as Hackers

Steven Levy's version of a hacker is an individualistic and driven person, with a driving passion to create and explore different technologies. But this combination is incredibly rare. His "true hacker" is an individual who is driven to create regardless of the money or the fame to be earned by doing so. They are inspired by pure creativity. Personally, I think there is just about nobody who wants to play with a program 'just because'. I mean, I love programming, but there's a realistic element which drives me that isn't just pure passion for computer science. There are elements of a "true hacker" which I embody, but I also match up with Paul Graham's version of a hacker.
Paul Graham describes an individual who strives to gain intelligence and because of that is socially isolated. One who can connect better with a program or a machine than with most humans. And Paul Graham's hacker is one who doesn't care that they don't fit in the universal box the world wants to put them in. They are great because they differ from the norm. These individualistic characteristics resemble those which Steven Levy attributed to the "true hackers". The bigger difference is that Steven Levy was more particular about those he considered hackers. Levy considered all those whose intentions were anything less than pure love of programming to be less than true hackers. Graham is different. He doesn't dismiss individuals as less of a hacker even if they have other interests beyond programming. In my opinion, Levy's version of a hacker is exclusionary. I consider myself to have a passion and drive to learn more about computers and become a better programmer. But it would be wrong of me to say that it was the only thing I ever wanted to do. In that way, I would not be considered one of Levy's hackers. Honestly, Graham's version of a hacker seems pretty close to how I would describe myself. I was never the popular kid in high school; perks of being two years ahead of just about every other person in my grade in math. I was the kid people cheated off of in chemistry. And for the most part, I was happier being invisible to just about everyone. My one difference in high school (which placed me above Graham's unpopular nerds) was that I played sports (which would theoretically bump me a bit up the totem pole). As sad as it sounds, programming has been one of the parts of college that is unbelievably fun. It mixes what I had always seen as a weird obsession with math and a love of puzzles. Levy's description of a hacker always made it seem as though the "true hacker" doesn't care about anything beyond programming.
Graham claims that it's not that they don't care about fitting in; it's just not one of those things that comes easy. Graham's version makes a lot more sense to me because it lines up with my experience as the socially awkward, nerdy kid in high school.

1/31/2019

Reading 02: Game Hackers

Nowadays, we are overwhelmingly used to computers being used equally for gaming and for traditional computing work. Some individuals use their computers exclusively for gaming, and there is a market for laptops that specialize in these types of games. But the original MIT "True Hackers" likely never imagined that computers would be used in this way. Despite these MIT hackers using computers beyond their original purpose, they likely never imagined the computer would evolve to be used by those who didn't care about the computational power, or even about the code for the game they were playing. The concept would be similar to somebody turning a calculator into a video game that had nothing to do with math. It was completely against the purpose, and its use wasn't related to the programming that the "True Hackers" love so much. I imagine that the "True Hackers," who were almost offended by the use of BASIC to simplify programming, definitely would not have any love for those who use computers to play games. But, as much as these hackers would have hated gaming entering the industry, this was the first taste of the huge amount of money that could be made from computers.
It makes sense to me that as the industry became more well known, it became evident there was a lot of money to be made. Thus, it brought forward those who were less interested in programming for the novelty of it and more interested in the money that was available. The "Hardware Hackers" were the first to touch on this by bringing the personal computer into the market, but even these hackers intended computers to be used as a programming tool. The new "Game Hackers" were different because their product wasn't a tool. It was a packaged product. What's more, these finished products began to be locked down, so the user could only see and interact with the outside shell of the program. They couldn't edit the code or alter the game in any way. This is a direct contradiction of the traditional viewpoint of the "True Hackers" that all code should be accessible to all, because it is this collaboration which allows programs to grow and develop. The idea that code would be hidden and there would be no access to the original source code is absolutely against the Hacker Ethic. The one consolation is that the widespread nature of gaming has spread computers beyond what they likely would have been capable of as tools. For instance, there is a large difference between those who are exposed to a video game and get a computer as a result (thus being exposed to computers), and those who are exposed to electronics kits and circuit builders (thus being exposed to electronics and engineering). A lot more people are exposed to and benefit from the exposure to a computer, even if it is through the guise of a video game. I see video games as a sort of "gateway drug" to programming. In this way, while consumerism and the desire for money rather than passion floods the gaming market, the programming world (for those who venture beyond gaming) is still open and free, just as the original hackers (more or less) intended.
In fact, while some games are very restrictive and prevent the user from experimenting beyond the game's boundaries, there are many more which encourage creativity and have thriving online communities that seek to push beyond the intended use of the game. One powerful example is Minecraft. While the game can be played strictly 'within the lines', there are communities adding personal modifications to the game that can act as a gateway between gaming and programming. While the "True Hackers" may be disappointed because today's 'hackers' are less interested in the pure machine and the primitive code associated with it, the hackers of today are still driven by their interest in programming. And while the programmers who love what they're doing may not necessarily be the best at the technical aspects of programming, they have a stronger desire to program and more potential to become better programmers. The huge expansion of programming beyond just the technical aspect into different industries enables those with a talent for programming to work on a system they have a passion for (even if their passion is not for programming but rather for the system they're working on). In my opinion, a programmer who isn't interested in their work (even if they're technically the best suited to do it) won't do as good a job as a programmer who loves the job. The programmer who loves the job will invest more time into the perfection and excellence of their work.