“This code holds an important place in history and is a fascinating read of an operating system that was written entirely in 8086 assembly code nearly 45 years ago,” the blog post noted. The release includes not only the source code for MS-DOS 4.00 but also additional beta binaries, documentation in PDF format, and disk images. These materials have been preserved and made accessible thanks to the efforts of internet archivist Jeff Sponaugle and the guidance of former Microsoft CTO Ray Ozzie.
This version of MS-DOS, developed in collaboration with IBM, has a complex history, featuring contributions to what would eventually evolve into OS/2. Notably, this release includes early, unreleased beta binaries discovered by a young English researcher, Connor “Starfrost” Hyde, who found them among Ray Ozzie’s collection of software.
Microsoft’s Open Source Programs Office (OSPO) explored the possibility of releasing the source code for MT-DOS (the multitasking version of MS-DOS) but eventually focused on MS-DOS 4.00. Although the full source code for MT-DOS was not found, the release of MS-DOS 4.00 represents a rich piece of computing history, showcasing an era when operating systems were written entirely in 8086 assembly code.
The released materials can run on hardware as old as an original IBM PC XT and as recent as a Pentium, and are also compatible with open-source emulators like PCem and 86box, allowing enthusiasts to explore this vintage software in a modern setting.
The initiative underscores the value of digital archaeology in preserving and understanding the technological advancements of the past. Microsoft and IBM’s collaborative effort highlights their ongoing commitment to sharing important historical artifacts with the public and contributing to the educational and technological community.
According to a company blog post, the move is in line with the team’s desire to let users and developers shape the future of artificial intelligence in Windows Terminal by fostering a collaborative environment for innovation.
Terminal Chat, which is currently available in Windows Terminal Canary, allows users to have conversations with an AI service directly in the terminal. This feature enables users to receive intelligent suggestions, such as searching for commands or understanding error messages, while maintaining the context of their terminal session.
The current implementation of the Terminal Chat feature in Windows Terminal requires users to furnish their own Azure OpenAI Service endpoint and key, as it lacks an integrated large language model. Those keen on utilizing Terminal Chat can find the corresponding code in the feature/llm branch of the Windows Terminal repository on GitHub. Furthermore, the most recent build of Windows Terminal Canary, which includes the Terminal Chat functionality, can be downloaded from the GitHub repository.
Configuring Terminal Chat in Windows Terminal Canary involves the manual addition of an AI service endpoint and key to the Terminal Chat settings. Currently, Terminal Chat exclusively integrates with the Azure OpenAI Service. To acquire the essential Azure OpenAI Service endpoint and key, users must create and deploy an Azure OpenAI Service resource.
Buck2 can be accessed from GitHub or from the Buck2 website. The system can build software written in any language, and can build software written in many languages simultaneously. For example, if developers have a Python binary that imports a Rust library that depends on an OCaml library that depends on a C file, Buck2 can help them, Meta says.
For each language, a generic rule should be written that describes what it means to compile in that particular language and how standard features such as testing, starting, and linking with C are provided. Buck2 ships ready-made with rules for Assembly, C/C++, Erlang, Go, Haskell, Java, JavaScript, Julia, OCaml, Python, and Rust. Developers can use the Starlark scripting language, a dialect of Python, to add or re-implement language rules to Buck2.
Buck2 is a completely rewritten Buck, with a separation of kernel and language-specific rules, increased parallelism, integration with remote execution and virtual file systems, and revised console output.
Buck2’s kernel is written in Rust, while language-specific rules, such as how to build C++, are written in Starlark (using Meta’s Rust implementation of Starlark). According to Meta, separating the language rules from the kernel makes it easier to change and understand the rules. A single dependency graph powers the build system, eliminating many types of errors and improving parallelism, while the rules API is designed to offer advanced performance features.
Choosing Rust offers advantages such as no pauses for garbage collection, while Java, which Meta used when writing Buck1, offers advantages such as better profiling tools, Meta says. The Buck2 binary is language agnostic.
Kubernetes 1.22, released in August 2021, contains the following new and updated features:
Other changes in Kubernetes 1.22:
It’s Excel, and it’s running on your laptop.
Excel is “the most successful programming system in the history of homo sapiens,” says Anaconda CEO Peter Wang in an interview “because regular ‘muggles’ can take this tool…put their data in it…ask their questions…[and] model things.” In short, it’s easy to be productive with Excel.
Superior ease and productivity: This is the future Wang envisions for the popular Python programming language.
Although Excel has succeeded without open source, Wang believes Python will succeed precisely because of open source.
Software, in short, is always a process and not really a product.
Open source was early to clue into this fact. Wang says, “What open source does is it opens the doors. It’s like the right to tinker, the right to repair, the right to extend.” In other words, open source embraces the idea of software as a service—as a process.
More important, this means that open source encourages more people to participate in its creation and success. With most software, Wang estimates that 90% to 95% of users are left out of the creation process. They might see the demos, but they’re trusting others to deliver software value on their behalf. By contrast, “open source for data science has become so successful because a whole new category of users got turned into makers and builders,” Wang says.
Research by the software company Aiven shows that almost all software developers see open source as “the future of coding”. They expect that OSS could become a regular part of their organizations. In fact, it already has in many companies, including a lot of IT giants. Aiven claims that OSS brings “twice as many” benefits compared to drawbacks.
What is open source software?
It is exactly what it says on the tin: open source software is completely open. Anyone can inspect, modify, or change the programming code. If an application has been created from open source code, that code is published and accessible for anyone to check or modify. This is not something that usually matters to the end user, but it is of great importance to software engineers and companies. Having access to the source code means they can verify that everything is fine and that there are no backdoors or other potential issues in the software.
Access to the source also means the freedom to modify the application in a way that suits the organization’s needs. It is likewise an opportunity for the community to further improve the project, correct bugs, add new features, and so on. And even for those who never modify it, it is a chance for an IT specialist to learn and improve their own skills.
An important point to mention is that open source does not necessarily mean “free” or “do whatever you want”.
While both are possible options, whether they are available depends on the original creator of the project. OSS usually comes with an “open source license”, which is a clear definition of the rules that anyone who is using the code is obligated to follow. These rules can vary depending on each project.
In most cases, though, the rules are straightforward. Usually they state that modification and distribution of the OSS are freely allowed within the scope of the law. The main requirement when modifying and distributing is that the license terms are followed and the same requirements are retained: for example, if the original is free and open source, a modified version must also be free and open source. In some cases, attribution may also be required.
Open source licenses mostly aim to preserve the OSS DNA and its basics. They can be compared to the Creative Commons licenses used for online content. As long as you are not trying to copy someone else’s code and then pass it off and sell it as your own, you should be good.
Why does OSS matter?
Initially, using OSS instead of “closed” software may seem counterintuitive. Why would you want anyone to be able to access the source code and, in turn, find loopholes to hack it? As it turns out, keeping the code closed is not necessarily the better option.
When it comes to proprietary software, only its author and authorized partners are able to copy, inspect, and modify the code in any way. This means that users and clients can only report issues, not try to fix them, and fixing those issues can be time-consuming. If the software is old and out of its support cycle, tough luck: you will have to live with the problem. Or the vendor can simply decline to fix the issue for whatever reason.
With open source software, you no longer have to worry about any of that. Not only have multiple experts from the community checked the source code; anyone who wants to use it can verify it themselves. If there is an issue or a bug, it is usually solved much faster, as there are many more capable people who can make that happen. Or you can simply have your own IT team fix it. In effect, the software can remain supported for as long as you want.
Open source software has greater implications, too. OSS brings benefits at a global scale, as many of the world’s most popular and important technologies rely on open source. Among them are the Linux operating system, the Apache web server, and more. Technically, even Android is open source, though there are nuances that affect its availability as an open source project versus the complete platform with Google’s services added to it.
Many cloud computing services, features, and technologies are also open source-based. As a whole, OSS is growing in popularity even in enterprises and governments. It ensures greater transparency and can be cheaper to maintain.
More open source code pros and cons
The Aiven report also points out a few more detailed benefits of OSS. For example, open source users depend significantly less on specific vendors, so vendor lock-in is not a worry for them. Transparency, quick bug fixes, and the ability to add unique features are also cited as top OSS benefits by the 200 British developers who took part in the survey, all of whom work at cloud- and database-related companies.
Of course, it would not be fair to focus only on the positives. There are some negative sides to open source code, too, mostly related to ease of use and practicality. Proprietary software developers invest quite a bit of effort and money into making sure their applications are (in general) easy to install and have intuitive interfaces. OSS, on the other hand, mostly focuses on function; usability can suffer, which can make the software difficult for employees to use without additional training.
This leads us to the potential hidden costs. Yes, the software itself can be much cheaper, even free, but if you have to spend additional money on training, configuration, and troubleshooting, it can get a bit expensive. On the other hand, this is a risk that can be avoided if the company plans carefully and researches new software and changes in detail before implementing them.
OSS fans are adamant that the benefits outweigh the negatives by a sizeable margin. It gives companies the flexibility they need to achieve their goals.
Open source is here to stay
A lot of new projects and technologies are now being built from the ground up with open source, networking and internet platforms among them. Robbert Hoeffnagel, European representative for the Open Compute Project Foundation, for example, has said that open source will be a vital part of the data center.
He gives three specific reasons. The first is the network effect: anyone can start an open source project, which means a huge field for new ideas and opportunities. Plenty of IT giants and big names have also joined, or are joining, the open source world. As a result, everyone sees that the approach has the support of big names, which brings even more awareness and trust to it.
Next, there is agility. The Open Compute Project (OCP) allows the creation of task-specific hardware that is optimized for its goal. So you can have themed data centers and features tailored for very specific loads. Open source-based hardware also means less risk of vendor lock-in.
Finally, the costs. OCP hardware can help lower costs when it is time for renovation, expansion, or even building a new facility, Hoeffnagel says. He argues that when you compare the costs of an OCP data center to a traditional one, the benefits are clear. That probably depends a lot on the type and goals of the data center, too, but it would certainly be much easier to incorporate new developments, which arrive at a very fast rate in the open source world.
What do you need to use open source?
As you can see, OSS is everywhere these days. For almost any proprietary software out there, there are usually at least a couple of OSS alternatives available, too. Sometimes they are not as good as the licensed software because they lack specific patented code and technology, but they are still decent alternatives. Often, though, you can get the same effects and results with OSS.
It is obvious that open source is here to stay, so you might as well start incorporating it into your workflow if you have not already. If you want to use open source software, all you have to do is find it, download it, and install it. But honestly, that is not enough. Open source is about the community; that is why platforms like GitHub are so popular. Making contributions to others’ projects is a vital part of the open source community.
There are plenty of options. You can test code, review code, write new code for a project, create features, fix bugs, perform regular maintenance, or write documentation. Basically, you can do anything that you are interested in or want to add to your skill set. It is a great way to improve your coding abilities and create new connections and opportunities, too.
And yet, programmers rarely see the spotlight in the way that famous celebrities, CEOs, and other public figures do. Here are some of the most influential programmers of all time:
1. Larry Page
Larry Page, along with his co-founder Sergey Brin, founded Google. Today, Google continues to serve as the world’s leading search engine, making the internet easily accessible to billions of people around the world.
Interestingly, Google PageRank is named after Larry Page, since Page was the programmer who created the innovative PageRank search engine algorithm.
At Alphabet (Google’s parent company), Page went on to oversee cutting-edge companies focused on human aging, artificial intelligence (AI), self-driving cars, and much more.
2. Dennis Ritchie
Dennis Ritchie was an American programmer, creator of the C programming language, and co-developer of UNIX.
The C programming language is efficient, portable, and powerful. It was developed between 1969 and 1973 at Bell Labs. Best of all, C is still one of the most popular programming languages today: it’s commonly used in embedded hardware programming, open source software, systems programming, 3D movies, and more.
Since just about everything on the web uses C and UNIX, Ritchie’s contributions to the world of programming are immense.
3. Bill Gates
Bill Gates is a computer programmer and co-founder of Microsoft, which is the largest software company in the world. Together, Bill Gates and Paul Allen revolutionized the world of software and computing.
Despite projections that his net worth could one day exceed $1 trillion, Gates continues to invest billions of dollars in some of the world’s most important philanthropic causes, like improving global access to healthcare and reducing extreme poverty.
Gates is ultimately one of the most well-known and generous programmers in the world.
4. Mark Zuckerberg
While Facebook is a topic that usually sparks some debate, there’s no doubt that Mark Zuckerberg changed the world forever when he built what became the world’s largest social network.
Through Facebook, billions of people are able to communicate with one another free of charge, regardless of one’s geographic location.
This is quite an amazing feat that has improved global communication and connectivity by a large margin.
5. Ken Thompson
Ken Thompson, who is often considered one of the pioneers of computer science, designed and implemented the original UNIX operating system. Today, UNIX and its variants continue to run on smartphones, supercomputers, military systems, global banking networks, and more.
Thompson, along with Dennis Ritchie, also re-wrote most of UNIX into the C programming language in 1973, which made development and porting significantly easier. Additionally, Thompson went on to create Belle, which was the first machine to achieve master-level play in chess.
Overall, Thompson is a programming legend who has made life significantly better for programmers everywhere.
6. Linus Torvalds
Linus Torvalds is a Finnish-American software engineer and creator of the Linux kernel, which underpins operating systems such as the Linux distributions, Chrome OS, and Android. Torvalds also created the version control system Git.
Torvalds believes “open source is the only right way to do software,” and has won numerous awards for his contributions in the technology arena. He’s an exceptionally talented and influential coder.
7. Satoshi Nakamoto
Satoshi Nakamoto is a bit of a strange case, because there is some uncertainty about the true identity of the pseudonymous Bitcoin founder. And, there are still many questions about the future that Bitcoin holds.
But as of today, with the world-changing trajectory that Bitcoin is headed towards, it’s safe to say that Nakamoto is a programmer who has already impacted how financial transactions will be conducted forever. In addition to designing Bitcoin, Nakamoto also created the first blockchain database.
8. Ada Lovelace
Ada Lovelace was an English mathematician, and the world’s first computer programmer. She was born in the year 1815, and eventually recognized that the Analytical Engine could be used for purposes beyond just crunching numbers.
Lovelace examined how technology related to humans and society, and then went on to create the first algorithm that could be used by the Analytical Engine.
She was truly ahead of her time, and had a tremendous influence on the history of computers.
9. Tim Berners-Lee
Tim Berners-Lee is the inventor of the World Wide Web. He imagined an open platform where people everywhere could freely share information, access opportunities, and work with one another despite geographical limitations.
In many ways, Berners-Lee’s vision has come to fruition, as the web has become an amazing place where programmers are free to collaborate on any projects they like. Thanks to Tim Berners-Lee, the world wide web continues to provide abundant opportunities for web developers, game programmers, and people from all walks of life.
10. Alan Turing
Alan Turing was a computer scientist, mathematician, and logician, and the creator of the Turing machine, a mathematical model of computation that can simulate any computer algorithm. Turing’s codebreaking work at Bletchley Park played a vital role in deciphering German ciphers during the Second World War, making him one of the most important figures of WW2.
It’s for this reason that many people consider Alan Turing to be among the greatest heroes of WW2, and the “father” of modern-day computing.
Today, Turing’s name lives on through the Turing Award, which is the highest honor one can achieve in the field of computing.
Deno Deploy was released as a first beta on June 23, with a series of beta releases set to follow. General availability is planned for the fourth quarter of 2021. A multitenant JavaScript engine running in 25 data centres worldwide, from Taiwan to Montreal, Los Angeles, and London, Deno Deploy integrates cloud infrastructure with the Google V8 virtual machine, allowing developers to develop locally and deploy globally.
Built on the same systems as Deno CLI, Deno Deploy is free for developers to use during Beta 1, with users able to sign up to use it via GitHub. During the past eight months, Deno’s developers have been designing the hosted service to supplement workflows with the open source Deno CLI.
Oracle GraalVM Enterprise Edition 21.1 is based on Oracle JDK version 1.8.0_291 and Oracle JDK version 11.0.11. GraalVM Community Edition in 21.1 is based on OpenJDK version 1.8.0_292 and OpenJDK version 11.0.11.
GraalVM 21.1 introduces new experimental binaries for both the Enterprise and Community editions, based on JDK 16.0.1.
The experimental status means that all components in the JDK 16 based binaries are considered experimental, regardless of their status in other distribution versions.
The JIT mode for Java applications is perhaps the most tested capability for these builds, so if you are interested in running your Java applications with the GraalVM compiler, or are currently using the JVMCI compiler in any other JDK 16 OpenJDK builds, consider trying out the GraalVM binaries. These include both the latest OpenJDK changes and the latest GraalVM compiler changes, a best-of-both-worlds setup.
Java 16 is the current release of Java and we are looking forward to providing support for the upcoming Java 17 LTS release. Note that due to the decommissioning of the aging macOS 10.7 build infrastructure, GraalVM Community Edition releases based on JDK 8 are no longer being built. Node.js included in GraalVM 21.1 has been updated to 14.16.1, which is recommended for most users.
There’s one more significant change regarding Node in GraalVM. As of GraalVM 21.1, Node.js support is no longer included in the base GraalVM download. It’s now a separate component that you can install with the gu install nodejs command. JavaScript support continues to be a part of the base download, it’s just the Node.js support that’s installable separately. This change is for speed and clarity — it aims to reduce the size of the base GraalVM download, and to reduce confusion among some users who want to use GraalVM primarily as their main JDK.
Compiler Updates
Updates to the compiler are especially exciting because they improve GraalVM across the board since the compiler underpins the performance of all the various languages supported on GraalVM!
In 21.1 there are two particularly interesting compiler improvements:
One new optimization eliminates unneeded memory barriers on sequential volatile writes on x86. Numerous volatile writes sometimes occur in a sequence after the code has been inlined. The GraalVM compiler now omits the memory barrier for all but the last write in the sequence, nicely speeding up methods like ConcurrentHashMap.transfer().
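To make the pattern concrete, here is a minimal sketch of what such a sequence of volatile writes looks like (the class and method names are illustrative, not from the GraalVM sources):

```java
// The pattern in question: several volatile stores in a row, as often appears
// after inlining. On x86, each volatile write normally implies a StoreLoad
// barrier; GraalVM 21.1 keeps only the barrier after the last write in the run.
public class VolatileSequence {
    volatile int a, b, c;

    void publishAll(int x, int y, int z) {
        a = x;   // barrier previously emitted here...
        b = y;   // ...and here...
        c = z;   // ...now only after this final volatile write
    }
}
```

The semantics of the writes are unchanged; only the redundant intermediate barriers are dropped.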
Loops Updates
This release includes support in the GraalVM Enterprise compiler for vectorizing loops that have the hashCode-like pattern. Hashcode is often computed with an idiom like: hash = c * hash + array[i] , which the compiler can now recognize and vectorize.
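For illustration, the hashCode-like reduction the compiler can now vectorize looks like this in plain Java (the class name is mine; the constant 31 mirrors String.hashCode, but any constant multiplier fits the pattern):

```java
// Classic polynomial hash idiom: hash = c * hash + array[i].
// This is the loop shape the GraalVM Enterprise compiler can now
// recognize and vectorize.
public class PolyHash {
    static int polyHash(int[] array) {
        int hash = 1;
        for (int i = 0; i < array.length; i++) {
            hash = 31 * hash + array[i];   // the vectorizable idiom
        }
        return hash;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        // By construction this matches java.util.Arrays.hashCode for int[].
        System.out.println(polyHash(data) == java.util.Arrays.hashCode(data)); // true
    }
}
```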
Another improvement in this regard is a novel loop inversion optimization. This adds compiler support to GraalVM Enterprise to generate inverted loops from regular ones. Inverted loops have superior characteristics for instruction-level parallelism and optimization capabilities compared to regular, head counted loops.
Native Image
GraalVM 21.1 adds support for multiple locales in Native Image. Now you can specify at build time which locales should be included in the generated executable. For example, to switch the default locale to German and also include French and English, use -H:DefaultLocale=de -H:IncludeLocales=fr,en. All locales can be included via -H:+IncludeAllLocales. ResourceBundles are included by default for all selected locales, but this can be changed by providing a locale-specific substring when requesting the bundle.
JavaScript
The version of Node.js GraalVM supports has been updated to 14.16.1. In this release, the node and npm binaries are not included in the base download, but are instead available to be installed separately.
JavaScript in GraalVM 21.1 includes Truffle’s support for iterators, iterables, and byte buffers. This allows JavaScript iterators to be used via the Value API (hasIterator(), getIterator(), hasIteratorNextElement(), getIteratorNextElement()); iterable objects from other languages to be iterated in GraalVM’s JavaScript runtime (e.g., via for-of loops) and vice versa; host ByteBuffers and buffers from other languages to be used with JavaScript typed arrays (e.g., new Uint8Array(foreignBuffer)) and DataView without copying; and ArrayBuffers to be accessed via the Value API (readBuffer*, writeBuffer*).
Python
One of the more significant improvements in GraalVM 21.1 for Python is the enhanced support for Java subclassing and new interop APIs for a better Jython migration path. Iteration over Python types from Java, catching and re-throwing Java exceptions in Python code, and implementing Java abstract classes and interfaces from Python are often-requested Jython features that GraalVM now provides, making migration easier.
Tools
Tooling is a very important part of the GraalVM ecosystem. In every release we try to improve the developer experience by enhancing the tools that ship with the GraalVM distribution, including the debugger, the profilers, and more. GraalVM 21.1 is no exception.
For example, VisualVM now also works on JDK 16 and supports running on Apple M1 chips. The new version of the VS Code GraalVM Extension includes a collection of features to make working with GraalVM, Java, and Micronaut applications easier:
Added unit test results visualization
Overview of the functionality
Back in the Java 6 era, the library offered preconditions. These methods, which are still in Guava, let you check a method’s preconditions before executing its logic.
Guava offers other similar precondition tests, such as checkArgument() and checkPositionIndex(). The latter call, by default, checks for a value between 0 and the maximum size of the given array. Java SE ultimately added similar capability. In Java 7, the Objects class provided a similar set of checks as static methods. In Java 9, the number of test methods was increased significantly.
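If Guava isn’t on your classpath, the JDK’s own helpers cover much of the same ground. Here is a minimal sketch using the Objects methods mentioned above (the class, method, and message strings are illustrative, not Guava’s API):

```java
import java.util.Objects;

// Precondition checks using the JDK's own helpers, which mirror Guava's
// Preconditions.checkNotNull/checkArgument/checkPositionIndex.
public class PreconditionDemo {
    static double average(int[] values, int upTo) {
        Objects.requireNonNull(values, "values must not be null"); // Java 7+
        if (upTo <= 0) {
            throw new IllegalArgumentException("upTo must be positive");
        }
        Objects.checkIndex(upTo - 1, values.length);               // Java 9+
        double sum = 0;
        for (int i = 0; i < upTo; i++) {
            sum += values[i];
        }
        return sum / upTo;
    }

    public static void main(String[] args) {
        System.out.println(average(new int[]{2, 4, 6}, 2)); // 3.0
    }
}
```

Failing any check throws the appropriate exception (NullPointerException, IllegalArgumentException, or IndexOutOfBoundsException) at the top of the method, before any work is done.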
Collections
Guava grew out of Google Collections, so the set of collections it provides is deep indeed. Since the early days of Java’s Collections class, you could use unmodifiable wrappers, such as Collections.unmodifiableSet and its siblings for list and map. However, it wasn’t until Java 9 that you could create these data structures using fluent factory methods such as Map.of, and it was not until Java 10 that you could enjoy Map.copyOf. Guava has offered this kind of functionality for many releases, so if you’re forced to use releases earlier than Java 10, Guava gives you a solution, and a lot more.
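A quick sketch of those JDK factory methods (Java 9’s List.of/Map.of and Java 10’s Map.copyOf), which mirror what Guava’s ImmutableList/ImmutableMap have offered for years; the class and helper names here are mine:

```java
import java.util.List;
import java.util.Map;

// Demonstrates the immutable factory methods added in Java 9/10.
public class ImmutableDemo {
    // Returns true if the list rejects mutation, as the JDK factories' results do.
    static boolean isImmutable(List<String> list) {
        try {
            list.add("probe");
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        List<String> langs = List.of("Java", "Kotlin");                 // Java 9
        Map<String, Integer> years = Map.copyOf(Map.of("Java", 1995)); // Java 9 + 10
        System.out.println(isImmutable(langs)); // true
        System.out.println(years.get("Java"));  // 1995
    }
}
```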
Guava provides immutable variants of all the standard collections (set, list, table, and so on), and it adds some additional, very handy data structures. The first of these is the BiMap, or bidirectional map. A BiMap solves a common problem that occurs with standard maps that are built on conventional key-value pairs: Sometimes you need the values to be keys and the keys to be values. With the BiMap, each entry serves as a key-value pair that can be reversed, so you can use the value to point to a key. The only requirement is that all keys and values be unique. A related and equally useful data structure in Guava is the multimap. This construct solves a problem that occurs routinely in Java programming: a key-value map in which the value is a collection of some kind, such as a list.
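To show what these two structures buy you, here is a JDK-only sketch of both ideas (class and method names are mine); Guava itself provides them ready-made as HashBiMap.create() and ArrayListMultimap.create() in com.google.common.collect:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hand-rolled versions of the BiMap and Multimap concepts.
public class GuavaStyleMaps {

    // BiMap idea: derive the inverse view, enforcing uniqueness of values.
    static <K, V> Map<V, K> inverse(Map<K, V> forward) {
        Map<V, K> inv = new HashMap<>();
        for (Map.Entry<K, V> e : forward.entrySet()) {
            if (inv.put(e.getValue(), e.getKey()) != null) {
                throw new IllegalArgumentException("values must be unique");
            }
        }
        return inv;
    }

    // Multimap idea: one key maps to a growing collection of values.
    static <K, V> void putMulti(Map<K, List<V>> multimap, K key, V value) {
        multimap.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    public static void main(String[] args) {
        Map<String, String> greetings = new HashMap<>();
        greetings.put("en", "hello");
        greetings.put("fr", "bonjour");
        System.out.println(inverse(greetings).get("bonjour")); // fr

        Map<String, List<String>> tags = new HashMap<>();
        putMulti(tags, "jvm", "java");
        putMulti(tags, "jvm", "kotlin");
        System.out.println(tags.get("jvm")); // [java, kotlin]
    }
}
```

Guava’s real BiMap keeps both directions in sync on every put, rather than building the inverse once as this sketch does.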
Lightweight cache
Guava provides several lightweight versions of features typically used in enterprise applications, including a cache and a publish-subscribe (pub-sub) solution. The cache works as a classic key-value store: cached values are returned if they are present; otherwise, they are computed via callable methods and inserted into the cache. You can configure the eviction method to your preference. Options include eviction based on size, on the absence of references, or on timing. Left to its own devices, the cache evicts on a least-recently-used (LRU) basis. You can also manually evict individual entries.
The Guava cache does not use a thread in the background constantly monitoring entries looking for which ones to clean up. It does that maintenance work only on insertions and on reads, meaning that, for example, items can remain in the cache even after their allotted time has expired or when there are no other references to them. Eventually, if there is memory pressure on the cache, those items will be collected and removed. However, if you need a different eviction scheme, you can use the Guava cache API to add your own preferred eviction mechanism.
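For a feel of the default size-based LRU policy, here is a JDK-only sketch (the class name is mine); in Guava itself you would instead configure it with CacheBuilder.newBuilder().maximumSize(n), which also supports the time- and reference-based eviction options described above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Size-bounded LRU eviction built on LinkedHashMap's access-order mode.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);          // access-order, not insertion-order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;       // evict the least-recently-used entry
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");      // touch "a"; "b" is now least recently used
        cache.put("c", 3);   // over capacity: "b" is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Note this sketch evicts eagerly on insertion, whereas Guava, as noted, performs its maintenance lazily during reads and writes.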
Extensions to Java primitive data types
In Java’s primitive data types there are no unsigned integers. Guava provides unsigned bytes, integers, and longs. It also provides boxed wrappers for these data types and thoughtfully includes the basic functions you’d expect in support of these types: minimum, maximum, and comparison functions; conversion to BigDecimal; as well as toString().
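Since Java 8, the JDK’s boxed types also carry unsigned helper statics that cover much of what Guava’s wrappers provide; a brief sketch (class name is mine):

```java
// Treating a signed int's bit pattern as an unsigned value with the
// Integer.*Unsigned helpers added in Java 8.
public class UnsignedDemo {
    public static void main(String[] args) {
        int allOnes = -1; // bit pattern 0xFFFFFFFF, i.e. 4294967295 unsigned
        System.out.println(Integer.toUnsignedString(allOnes));       // 4294967295
        System.out.println(Integer.toUnsignedLong(allOnes));         // 4294967295
        System.out.println(Integer.compareUnsigned(allOnes, 1) > 0); // true
        System.out.println(Integer.divideUnsigned(allOnes, 2));      // 2147483647
    }
}
```

Unlike these static helpers, Guava’s UnsignedInteger and UnsignedLong are value wrappers, so the unsigned interpretation travels with the object rather than with each call site.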