jjkaczor 4 minutes ago

I wanted to like OS/2 (Warp 3.x?)...

However, it was the only operating system that I have ever used (before or since) that had some issues with its provided disk drivers that ended up deleting data and corrupting its own fresh install... so, it didn't last long for me...

mikewarot a day ago

The cool thing about OS/2 2.1 was that you could easily boot off of a single 1.44 Mb floppy disk, and run multitasking operations, without the need for the GUI.

I had (and likely have lost forever) a Boot disk with OS/2, and my Forth/2 system on it that could do directory listings while playing Toccata and Fugue in D minor in a different thread.

I wrote Forth/2 out of pure spite, because somehow I heard that it just wasn't possible to write OS/2 applications in assembler. Thanks to a copy of the OS/2 development kit from Ward Christensen (who worked at IBM), and a few months of spare time, Forth/2 was born, written in pure assembler, compiling to directly threaded native code. Brian Matthewson from Case Western wrote the manual for it. Those were fun times.

  • JdeBP 8 hours ago

    There was an awful lot of nonsense put about during the Operating System Wars, a lot of it by people who had not the first bloody clue about operating systems at all.

    Sometimes it was a very clueless manifestation of the telephone game effect, where the fact that the OS/2 API was designed to be easily callable from high-level languages, without all of the fiddling about with inline assembly language, compiler intrinsics, or C library calls that one did to call the DOS API, could morph into a patently ridiculous claim that one could not write OS/2 applications in assembly language.

    Sometimes, though, it was (as we all later found out) deliberate distortion by marketing people.

    Patently ridiculous? Yes, to anyone who actually programmed. The book that everyone who wanted to learn how to program OS/2 would have bought in the early years was Ed Iacobucci's OS/2 Programmer's Guide, the one that starts off with the famous "most important operating system, and possibly program, of all time" quotation by Bill Gates. Not only are examples dotted throughout the book in (macro) assembly language, there are some 170 pages of assembly language program listings in appendix E.

  • whobre 2 hours ago

    > The cool thing about OS/2 2.1 was that you could easily boot off of a single 1.44 Mb floppy disk, and run multitasking operations, without the need for the GUI.

    It was cool, but don’t forget that you could do the same thing with MP/M on an 8-bit machine in late 1979.

    Even Microsoft developed a similar operating system a year later, but never released it. The code name was M-DOS, or MIDAS, depending who you ask.

  • cmiller1 a day ago

    > I wrote Forth/2 out of pure spite, because somehow I heard that it just wasn't possible to write OS/2 applications in assembler

    I was thinking about this recently and considering writing a blog post about it, nothing feels more motivational than being told "that's impossible." I implemented a pure CSS draggable a while back when I was told it's impossible.

    • JdeBP 8 hours ago

      We've all done it. (-:

      For some while I read people saying that, despite Paul Jarc having shown how svscan as process 1 would actually work and Gerrit Pape leading the way with runit-init and demonstrating the basic idea, one could not do full system management with daemontools and wholly eliminate van Smoorenburg init and rc.

      * https://code.dogmap.org/svscan-1/

      * https://smarden.org/runit/

      It was one of the motivating factors in the creation of nosh, to show that what one does is exercise a bit of imagination, take the daemontools (or daemontools-encore) service management, and fairly cleanly layer separate system management on top of that. Gerrit Pape pioneered the just-3-shell-scripts approach, and I extended that idea with some notions from AIX, SunOS/Solaris, AT&T System 5, and others. The service manager and the system manager did not have to be tightly coupled into a single program, either. That was another utter bunkum claim.

      * https://jdebp.uk/Softwares/nosh/#SystemMangement

      * https://jdebp.uk/Softwares/nosh/guide/new-interfaces.html

      Laurent Bercot demonstrated the same thing with s6 and s6-rc. (For clarity: Where M. Bercot talks of "supervision" I talk of "service management" and where M. Bercot talks of "service management" as the layer above supervision I talk of "system management".)

      * https://skarnet.org/software/s6-rc/overview.html

      The fallacy was still going strong in 2016, some years afterwards, here on Hacker News even.

      * https://news.ycombinator.com/item?id=13055123

  • jeberle a day ago

    That is very cool. I had a similar boot disk w/ DOS 3.x and TurboPascal. It made any PC I could walk up to a complete developer box.

    Just to be clear, when you say "without the need for the GUI", more accurately that's "without a GUI" (w/o Presentation Manager). So you're using OS/2 in an 80x25 console screen, what would appear to be a very good DOS box.

    • kev009 18 hours ago

      OS/2 had an evolving marketing claim of "better DOS than DOS" and "better Windows than Windows" and they both were believable for a time. The Windows one collapsed quickly with Win95 and sprawling APIs (DirectX, IE, etc).

      It exists in that interesting but obsolete interstitial space, alongside BeOS, of very well done single-user OSes.

    • tengwar2 16 hours ago

      You could write your own GUI rather than using a text screen. I had to do that as I was working under OS/2 1.0, which didn't have the Presentation Manager. It did mean providing your own character set (ripped from VGA on DOS in my case) and mouse handling.

      Btw, I'd love to know where this idea about no assembler programs on OS/2 came from. There was no problem at all with using MASM, with CodeView to debug on a second screen.

      • ForOldHack 8 hours ago

        CodeView? You *must* have been there.

    • mikewarot 7 hours ago

      Circa 1987, my DOS boot disk had a "Backpack" hard disk driver on it, so I could plug it into the parallel printer port and boot up with 300 Megabytes of my stuff instantly available as D: on any customer machine. It made service calls a lot easier to manage, no more stacks of floppy disks.

      300 Megabytes!!!

      I had all my source code on it, archives, utilities, compilers, the whole shebang!

      • glhaynes 2 hours ago

        Surely 30 MB in 1987, not 300, right?

    • ForOldHack 8 hours ago

      Ahem, the minute I got my 286 (free) home, I added MODE CON LINES=50 to my DOS 3.3/TP 3.02+8087 disk. Everything worked perfectly until you actually tried to do some text addressing, but I was able to pass Physics 1 and 2, Pascal, and the program design and styles class with As; the machine served me well. Now if I had had the $400 for the extra 4 MB of RAM, it would have run OS/2 2.1 in text mode... Or not.

      Oh the screen would go to snow often, and sidekick would bring it right back.

      How well did OS/2 handle the text modes for VGA?

      • JdeBP 7 hours ago

        Very well, because OS/2 1.x introduced, for the first time in Microsoft's MS-DOS lineage, a fully-fledged VIO subsystem that abstracted TUI applications programming wholly away from touching the hardware. (GUI applications programming with VIO still required some low-level stuff. But TUI applications programming was entirely in terms of high-level console I/O with streams of characters and low-level console I/O with a 2-dimensional output buffer. There was no mixture of directly poking hardware and calling into the machine firmware.)

        I regularly ran in 50-line VGA mode with zero problems. One could session-switch between full-screen OS/2 TUI programs (that were genuinely operating the hardware in VGA text mode, not simulating it with the hardware itself being used in graphics mode as became the norm with other operating systems much later and which OS/2 itself never got around to) and the Presentation Manager desktop.
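
        For the curious, the gist of such a mode switch through VIO looks roughly like this (a from-memory sketch, so treat the exact VIOMODEINFO fields as approximate):

            #define INCL_VIO
            #include <os2.h>

            void lines50(void)
            {
                VIOMODEINFO vmi;

                vmi.cb = sizeof(vmi);
                VioGetMode(&vmi, 0);    /* 0 = the default video handle   */
                vmi.row = 50;           /* ask for 50 text rows...        */
                vmi.col = 80;           /* ...by 80 columns               */
                VioSetMode(&vmi, 0);    /* no hardware poking anywhere    */
            }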

        I even had a handy LINES 50 command that wrapped the VIO mode change function, that I gave to the world as one of many utilities accompanying my 32-bit CMD, and which in 32-bit form was layered on top of a debugged clean room reimplementation of IBM's never-properly-shipped-outwith-the-Developers'-Toolkit 32-bit version of the VIO API.

        You can still download it from Hobbes, today.

        * https://hobbesarchive.com/?detail=/pub/os2/util/shell/32-bit...

  • userbinator 18 hours ago

    Look at MenuetOS and KolibriOS for a newer multitasking OS, with a GUI, that also fits on a single floppy.

kevindamm a day ago

Preemptive multithreading is better than cooperative multithreading (which Windows 3 used), but then it's de-fanged by allowing threads and processes to adjust their own priority and set arbitrary lower bounds on how much time gets allotted to a thread per thunk.
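
(For concreteness, "a thread adjusting its own priority" looks roughly like the sketch below; I am using the later 32-bit spelling of the call and its constants from memory, so treat the names as approximate for the 1987-era API.)

    #define INCL_DOSPROCESS
    #include <os2.h>

    /* a thread lifting itself into the time-critical priority class */
    void go_time_critical(void)
    {
        DosSetPriority(PRTYS_THREAD,          /* scope: just this thread  */
                       PRTYC_TIMECRITICAL,    /* class: time-critical     */
                       0,                     /* delta within the class   */
                       0);                    /* 0 = the calling thread   */
    }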

Then there's this:

   > All of the OS/2 API routines use the Pascal extended keyword for their calling convention so that arguments are pushed on the stack in the opposite order of C. The Pascal keyword does not allow a system routine to receive a variable number of arguments, but the code generated using the Pascal convention is smaller and faster than the standard C convention.
Did this choice of a small speed boost over compatibility ever haunt the decision makers, I wonder? At the time, the speed boost probably was significant at the ~MHz clock speeds these machines were running at, and Moore's Law had only just gotten started. Maybe I tend to lean in the direction of compatibility, but this seemed like a weird choice to me. Then, in that same paragraph:

   > Also, the stack is restored by the called procedure rather than the caller.
What could possibly go wrong?
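
(For reference, here is roughly what that convention looked like from the C side on 16-bit OS/2; the "far pascal" pair is more or less what the SDK's APIENTRY macro expanded to, and this declaration is from memory rather than from the article.)

    /* arguments pushed left-to-right; the callee pops them on return */
    unsigned short far pascal DosBeep(unsigned short usFrequency,
                                      unsigned short usDuration);

    void beep_example(void)
    {
        DosBeep(440, 250);    /* 440 Hz for 250 ms; looks like any C call */
    }
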
  • mananaysiempre a day ago

    16-bit Windows used the Pascal calling convention, with the documentation in the Windows 1.0 SDK only listing Pascal function declarations. (Most C programs for 16-bit Windows use FAR PASCAL in their declarations—the WINAPI macro was introduced with Win32 as a porting tool.) The original development environment for the Macintosh was a Lisa prototype running UCSD Pascal, and even the first edition of Inside Macintosh included Pascal declarations only. (I don’t know how true it is that Windows originated as a porting layer for moving (still-in-development) Excel away from (still-in-development) Mac, but it feels at least a bit true.) If you look at the call/return instructions, the x86 is clearly a Pascal machine (take the time to read the full semantics of the 80186’s ENTER instruction at some point). Hell, the C standard wouldn’t be out for two more years, and function prototypes (borrowed early from the still-in-development C++, thus the unhinged syntax) weren’t a sure thing. C was not yet the default choice.

    >> Also, the stack is restored by the called procedure rather than the caller.

    > What could possibly go wrong?

    This is still the case for non-vararg __stdcall functions used by Win32 and COM. (The argument order was reversed compared to Win16’s __far __pascal.) By contrast, the __syscall convention that 32-bit OS/2 switched to uses caller cleanup (and passed some arguments in registers).
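
    For instance, a Win32 prototype still has the callee-cleanup annotation baked in (types simplified here; WINAPI just expands to __stdcall):

        /* callee cleanup, but with C-style right-to-left argument order */
        int __stdcall MessageBoxA(void *hwnd, const char *text,
                                  const char *caption, unsigned int type);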

    • JdeBP 7 hours ago

      The Windows as a porting layer story is a whole subject in its own right. But the WINAPI macro was a porting thing.

      But this is an example on this very page of the telephone-game problem that happened during the Operating System Wars, where the porting tool of the WINAPI macro that Microsoft introduced into its DOS-Windows SDK, allowing 32-bit programmers to divorce themselves from the notion of "far" function calls that 16-bit programmers had to be very aware of, becomes intertwined into a larger "Windows is a porting layer" tale, despite the two being completely distinct.

      That a couple of applications essentially supplied 16-bit Windows as a runtime really was not related to the 16-bit to 32-bit migration, which came out some while after DOS-Windows was a standalone thing that one ran explicitly, rather than as some fancy runtime underpinnings for a Microsoft application.

      * https://jdebp.uk/FGA/function-calling-conventions.html#WINAP...

    • Uvix a day ago

      I don't know if Windows started as a porting layer but it certainly ended up as one. Windows was already on v2.x by the time Excel was released on PC, but the initial PC version of Excel shipped with a stripped-down copy of Windows so that it could still run on machines without Windows. https://devblogs.microsoft.com/oldnewthing/20241112-00/?p=11...

      • p_l a day ago

        Before Windows 3.0 made a big splash, it was a major source of Windows revenue - bundling a stripped-down Windows runtime with applications as a GUI SDK.

        The Windows 3.0 effort was initially disguised as an update for this before management could be convinced to support the project.

  • dnh44 a day ago

    I loved OS/2 but I also remember the dreaded single input queue... but it didn't stop me using it until about 2000 when I realised it was time to move on.

    • JdeBP 8 hours ago

      You actually mis-remember. One of the things that was a perpetual telephone-game distortion during the Operating System Wars was people talking about a single input queue.

      Presentation Manager did not have a single input queue. Every PM application had its own input queue, right from when PM began in OS/2 1.1, created by a function named WinCreateMsgQueue() no less. There were very clearly more than 1 queue. What PM had was synchronous input, as opposed to asynchronous in Win32 on Windows NT.
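
      The per-application queue is visible right in the canonical PM boilerplate. A from-memory sketch (window creation and error handling omitted):

          #define INCL_WIN
          #include <os2.h>

          int main(void)
          {
              HAB  hab = WinInitialize(0);           /* anchor block for this thread */
              HMQ  hmq = WinCreateMsgQueue(hab, 0);  /* this application's own queue */
              QMSG qmsg;

              /* ... create the frame and client windows here ... */

              while (WinGetMsg(hab, &qmsg, NULLHANDLE, 0, 0))  /* pull from our queue */
                  WinDispatchMsg(hab, &qmsg);

              WinDestroyMsgQueue(hmq);
              WinTerminate(hab);
              return 0;
          }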

      Interestingly, in later 32-bit OS/2 IBM added some desynchronization where input would be continued asynchronously if an application stalled.

      Here's Daniel McNulty explaining the difference in 1996:

      * https://groups.google.com/g/comp.os.os2.beta/c/eTlmIYgm2WI/m...

      And here's me kicking off an entire thread about it the same year:

      * https://groups.google.com/g/comp.os.os2.programmer.misc/c/Lh...

      • dnh44 2 hours ago

        Thanks for the reminder! It’s very likely I read that post as a teenager.

    • chiph 21 hours ago

      Because of that, I got good at creating multi-threaded GUI apps. Stardock were champs at this - they had a newsgroup reader/downloader named PMINews that took full advantage of multithreading.

      The rule of thumb I had heard and followed was that if something could take longer than 500ms you should get off the UI thread and do it in a separate thread. You'd disable any UI controls until it was done.
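
      The pattern was roughly the following (a sketch, not real Stardock code; WM_WORK_DONE is a name invented here, and the exact _beginthread signature varied by compiler):

          #define INCL_WIN
          #include <os2.h>

          #define WM_WORK_DONE (WM_USER + 1)   /* invented for this sketch */

          void worker(void *arg)
          {
              HWND hwnd = (HWND)arg;
              /* ... the piece of work that could take > 500 ms ... */
              WinPostMsg(hwnd, WM_WORK_DONE, 0, 0);  /* callable from any thread */
          }

          /* In the window procedure, when the user kicks off the work:
               WinEnableWindow(hwndButton, FALSE);
               _beginthread(worker, NULL, 32 * 1024, (void *)hwnd);
             and re-enable the controls when WM_WORK_DONE arrives. */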

      • silon42 5 hours ago

        Why do I remember it was 50ms?

      • dnh44 21 hours ago

        I always liked Stardock; if I had to use Windows I'd definitely just get all their UI mods for the nostalgia factor.

  • maximilianburke a day ago

    Callee clean-up was (is? is.) standard for the 32-bit Win32 API; it's been pretty stable for coming up on 40 years now.

    • jvert 14 hours ago

      Early in Win32 development, the x86 calling convention was __cdecl. I did all the work to change it to __stdcall (callee clean-up). Yes, it was done purely for performance reasons. It was a huge change to the codebase and as a side-effect turned up a lot of "interesting" code which relied on cdecl calling conventions.

    • to11mtm a day ago

      For 32 bit yes, although IIRC x64 convention is caller clean-up.

  • karmakaze 13 hours ago

    I remember learning this while I was on Microsoft's campus learning about OS/2 APIs. As I recall they said about 7% faster which is hard to leave on the table. I never had any issue with it the whole time I developed for OS/2 as well as Windows NT that used the same convention.

  • flohofwoe a day ago

    > Did this choice of a small speed boost over compatibility ever haunt the decision makers,

    ...in the end it's just another calling convention which you annotate your system header functions with. AmigaOS had a vastly different (very assembly friendly) calling convention for OS functions which exclusively(?) used CPU registers to pass arguments. C compilers simply had to deal with it.

    > What could possibly go wrong?

    ...totally makes sense though when the caller passes arguments on the stack?

    E.g. you probably have something like this in the caller:

        push arg3      => place arg 3 on stack
        push arg2      => place arg 2 on stack
        push arg1      => place arg 1 on stack
        call function  => places return address on stack
    
    ...if the called function would clean up the stack it would also delete the return address needed by the return instruction (which pops the return address from the top of the stack and jumps to it).

    (ok, x86 has the special `ret imm16` instruction which adjusts the stack pointer after popping the return address, but I guess not all CPUs could do that back then)

    • agent327 17 hours ago

      AmigaOS only used D0 and D1 for non-ptr values, and A0 and A1 for pointer values. Everything else was spilled to the stack.

      • flohofwoe 6 hours ago

        Ah ok, I remembered wrong then. Thanks for the correction.

  • rep_lodsb a day ago

    On x86, the RET instruction can add a constant to the stack pointer after popping the return address. Compared to the caller cleaning up the stack, this saves 3 bytes (and about the same number of clock cycles) for every call.

    There is nothing wrong with using this calling convention, except for those specific functions that need to have a variable number of arguments - and why not handle those few ones differently instead, unless you're using a braindead compiler / language that doesn't keep track of how functions are declared?

    • skissane 13 hours ago

      > There is nothing wrong with using this calling convention, except for those specific functions that need to have a variable number of arguments

      I think it is a big pity that contemporary mainstream x86[-64] calling conventions (both Windows and the SysV ABI used by Linux - and almost everybody else) don’t pass the argument count in a register for varargs functions. This means there is no generic way for a varargs function to know how many arguments it was called with - some functions use a sentinel value (often NULL), for some one of the arguments contains an embedded DSL you need to parse (e.g. printf and friends). Using obtuse preprocessor magic you can make a macro with the same name as your function which automatically passes its argument count as a parameter - but that is rarely actually done.

      The OpenVMS calling convention - including the modified version of the SysV ABI which the OpenVMS x86-64 port uses - passes the argument count of varargs function calls in a register (eax), which is then available using the va_count macro. I don’t know why Windows/Linux/etc didn’t copy this idea, I wish they had - but it is too late now.
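
      The preprocessor trick looks something like this (a sketch; it handles 1 to 8 arguments and needs C99 variadic macros):

          #include <stdarg.h>

          static int sum_n(int count, ...)
          {
              va_list ap;
              int total = 0;

              va_start(ap, count);
              while (count-- > 0)
                  total += va_arg(ap, int);
              va_end(ap);
              return total;
          }

          /* pick out the Nth argument to learn how many were passed */
          #define NTH(_1,_2,_3,_4,_5,_6,_7,_8,N,...) N
          #define COUNT(...) NTH(__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1)

          /* sum(1, 2, 3) expands to sum_n(3, 1, 2, 3) */
          #define sum(...) sum_n(COUNT(__VA_ARGS__), __VA_ARGS__)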

    • mananaysiempre 20 hours ago

      > There is nothing wrong with using this calling convention

      Moreover, it can actually support tail calls between functions of arbitrary (non-vararg) signatures.

  • ataylor284_ 21 hours ago

    Yup. If you call a function with the C calling convention with the incorrect number of parameters, your cleanup code still does the right thing. With the Pascal calling convention, your stack is corrupted.

    • rep_lodsb 21 hours ago

      Yeah, it's really irresponsible how Pascal sacrifices such safety features in the name of faster and more compact code... oh, wait, the compiler stops you from calling a function with incorrect parameters? Bah, quiche eaters!

      • JdeBP 7 hours ago

        I have not read someone talking of quiche eaters in some years. Thank you for keeping the joke alive. (-:

treve 18 hours ago

This article is probably the first time I 'get' why OS/2 was seen as the future and Windows 3 as a stop-gap, even without the GUI. The OS/2 GUI never really blew me away, and whenever early non-GUI versions of OS/2 are mentioned it always seemed a bit dismissive.

But seeing it laid out as just the multi-tasking kernel that it is, it seems more obvious now that it was a major foundational upgrade of MS-DOS.

Great read!

pjmlp a day ago

After all these years COM is still not as cool as SOM used to be.

With meta-classes, implementation inheritance across multiple languages, and much better tooling in the OS tier 1 languages.

  • mananaysiempre a day ago

    Cool, yes. Useful or a good idea, I dunno. Reading through the (non-reference) documentation on SOM, I’m struck by how they never could give a convincing example for the utility of metaclasses. (Saying this as someone who does love metaclasses in Python, which are of course an inferior interpretation of the same academic sources.) The SOM documentation is also surprisingly shallow given its size: with a copy of Brockschmidt, Box, the COM spec, and the Platform SDK manual, you could reimplement essentially all of COM (not ActiveX though), whereas IBM’s documentation is more like “here’s how you use our IDL compiler and here are the functions you can call”. (This is in contrast with the Presentation Manager documentation, which is much tighter and more detailed than the one for USER/GDI has ever been.) From what I can infer of the underlying principles, I feel SOM is much more specific about its object model, which, given the goal is a cross-language ABI, is not necessarily a good thing. (I’d say that about WinRT too.)

    And of course COM does do implementation inheritance: despite all the admonitions to the contrary, that’s what aggregation is! If you want a more conventional model and even some surprisingly fancy stuff like the base methods governing the derived ones and not vice versa, BETA-style, then WinRT inheritance[1] is a very thin layer on top of aggregation that accomplishes that. Now if only anybody at Microsoft bothered to document it. As in, at all.

    (I don’t mean to say COM is my ideal object model/ABI. That would probably be a bit closer to Objective-C: see the Maru[2]/Cola/Idst[3] object model and cobj[4,5] for the general direction.)

    [1] https://www.interact-sw.co.uk/iangblog/2011/09/25/native-win...

    [2] https://web.archive.org/web/20250507145031/https://piumarta....

    [3] https://web.archive.org/web/20250525213528/https://www.piuma...

    [4] https://dotat.at/@/2007-04-16-awash-in-a-c-of-objects.html

    [5] https://dotat.at/writing/cobj.html

    • pjmlp a day ago

      Because at the time it was obvious, Smalltalk was the C++ companion on OS/2, a bit like VB and .NET came to be on Windows years later.

      Aggregation is not inheritance, rather a workaround, using delegation. And it has always been a bit of a pain to set up, if one wants to avoid writing all the boilerplate by hand.

      As for WinRT, I used to have it in high regard, until Microsoft management managed to kill everything good that UWP was all about, and now only folks that cannot avoid it, or Microsoft employees on Windows team, care about its existence.

      • owlstuffing 14 hours ago

        > Aggregation is not inheritance, rather a workaround, using delegation

        Although the designers of COM had the right idea, they implemented delegation in just about the worst way possible. Instead of implementing true delegation with multiple interface inheritance, similar to traits, they went pure braindead compositional. The result: unintuitive APIs that led to incomprehensible chains of QueryInterface calls.

        • pjmlp 5 hours ago

          In general the tooling sucks, which is kind of strange given how relevant COM is on Windows, even more so since Windows Vista.

          It seems the Windows team is against having something like VB 6, Delphi, C++ Builder, .NET Framework, MFC, approaches to COM tooling, just out of principle.

          Thus we end up with low-level clunky code, with endless calls to specific APIs like QueryInterface(), manually written boilerplate code, and with the IDL tools, manually merging generated code, because they were not designed to take existing code into account.

      • mananaysiempre 21 hours ago

        > Because at the time [the utility of metaclasses] was obvious, Smalltalk was the C++ companion on OS/2 [...].

        Maybe? I have to admit I know much more about Smalltalk internals than I ever did about actually architecting programs in it, so I’ll need to read up on that, I guess. If they were trying to sell their environment to the PC programmer demographic, then their marketing was definitely mistargeted, but I never considered the utility was obvious to them rather than the whole thing being an academic exercise.

        > Aggregation is not inheritance, rather a workaround, using delegation. And it has been always a bit of the pain to [...] avoid writing all the boilerplate by hand.

        Meh. Yes, the boilerplate is and always had been ass, and it isn’t nice that the somewhat bolted-on nature of the whole thing means most COM classes don’t actually support being aggregated. Yet, ultimately, (single) implementation inheritance amounts to two things: the derived object being able to forward messages to the base one—nothing but message passing needed for that; and the base object being able to send messages to the most derived one—and that’s what pUnkOuter is for. That’s it. SOM’s ability to allocate the whole thing in one gulp is nice, I’d certainly rather have it than not, but it’s not strictly necessary.

        Related work: America (1987), “Inheritance and subtyping in a parallel object-oriented language”[1] for the original point; Fröhlich (2002), “Inheritance decomposed”[2], for a nice review; and Tcl’s Snit[3] is a nice practical case study of how much you can do with just delegation.

        > As for WinRT, I used to have it in high regard, until Microsoft management managed to kill everything good that UWP was all about [...].

        Can’t say I weep for UWP as such; felt like the smartphonification of the last open computing platform was coming (there’s a reason why Valve got so scared). As for WinRT, I mean, I can’t really feel affection for anything Microsoft releases, not least because Microsoft management definitely doesn’t, but that doesn’t preclude me from appreciating how WinRT expresses seemingly very orthodox (but in reality substantially more dynamic) implementation inheritance in terms of COM aggregation (see link in my previous message). It’s a very nice technical solution that explains how the possibility was there from the very start.

        [1] https://link.springer.com/chapter/10.1007/3-540-47891-4_22

        [2] https://web.archive.org/web/20060926182435/http://www.cs.jyu...

        [3] https://wiki.tcl-lang.org/page/Snit%27s+Not+Incr+Tcl

        • twoodfin 19 hours ago

          > If they were trying to sell their environment to the PC programmer demographic, then their marketing was definitely mistargeted, but I never considered the utility was obvious to them rather than the whole thing being an academic exercise.

          IBM wasn’t selling to developers. SOM was first and foremost targeting PHB’s, who were on a mission to solve the “software crisis”. We somehow managed to easily escape that one despite no one coming up with a silver bullet solution.

          • mananaysiempre 17 hours ago

            I admit to having no experience with this particular model year of PHB. Were they really that prone to being dazzled with seemingly advanced shinies? All of the metaclass talk must have sounded like a stream of meaningless technobabble unless you were not just a practicing programmer, but also up to date on the more research-oriented side of things (Smalltalk, CLOS, et al.).

            The talk was meaningful, don’t get me wrong, I just don’t see how it could sell anything to a non-technical audience.

            • twoodfin 14 hours ago

              What I mean is that the SOM feature checklist was downstream of trying to sell “software components” as a means of turning development shops into “software factories” to an audience of PHB’s.

              CORBA had a ton of “technobabble”, too: It wasn’t there to make the standard better for developers.

              • pjmlp 8 hours ago

                Yeah, but since this stuff comes in cycles, now we have WebAssembly Component Model, with gRPC predating it.

                And I would vouch that REST/GraphQL with SaaS products finally managed to achieve that vision of software factories. Nowadays a big part of my work is connecting SaaS products, e.g. a frontend in some place, maybe with a couple of microservices, plugged into CMS, e-commerce, payment, marketing, and notification SaaS products.

    • cyberax 20 hours ago

      That's because the core of COM is just a function table with 3 fixed initial entries (QueryInterface/AddRef/Release). I had a toy language that implemented COM and compiled to native code; it produced binaries that could run _both_ on Novell NetWare and Windows (NetWare added support for PE binaries in '98, I think).
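
      That three-entry core is small enough to sketch from scratch in plain C (no SDK headers; the type names here are invented for the illustration, and real COM would compare actual GUIDs in QueryInterface):

          #include <stdio.h>

          typedef struct Obj Obj;

          typedef struct Vtbl {
              /* every COM interface begins with these three slots */
              int      (*QueryInterface)(Obj *self, const void *iid, void **out);
              unsigned (*AddRef)(Obj *self);
              unsigned (*Release)(Obj *self);
              /* ...interface-specific methods follow... */
              void     (*Hello)(Obj *self);
          } Vtbl;

          struct Obj {
              const Vtbl *vtbl;   /* first field: pointer to the function table */
              unsigned    refs;
          };

          static int QueryInterface(Obj *self, const void *iid, void **out)
          {
              (void)iid;                  /* a real object would check the iid */
              *out = self;
              self->vtbl->AddRef(self);
              return 0;                   /* S_OK */
          }

          static unsigned AddRef(Obj *self)  { return ++self->refs; }
          static unsigned Release(Obj *self) { return --self->refs; }
          static void     Hello(Obj *self)   { (void)self; puts("hello"); }

          static const Vtbl vtbl = { QueryInterface, AddRef, Release, Hello };

          int main(void)
          {
              Obj o = { &vtbl, 1 };
              o.vtbl->Hello(&o);          /* exactly how a COM client calls in */
              return 0;
          }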

      The dark corner of COM was IDispatch.

      • mananaysiempre 17 hours ago

        Yeah, IUnknown is so simple there isn’t really much to implement (that’s not a complaint). I meant to reimplement enough of the runtime that it, say, can meaningfully use IMarshal, load proxy/stub DLLs, and such.

        As for IDispatch, it’s indeed underdocumented—there’s some stuff in the patents that goes beyond the official docs but it’s not much—and also has pieces that were simply never used for anything, like the IID and LCID arguments to GetIDsOfNames. Thankfully, it also sucks: both from the general COM perspective (don’t take it from me, take it from Box et al. in Effective COM) and that of the problem it solves (literally the first contact with a language that wasn’t VB resulted in IDispatchEx, changing the paradigm quite substantially). So there isn’t much of an urge to do something like it for fun. Joel Spolsky’s palpable arrogance about the design[1,2] reads quite differently with that in mind.

        [1] https://www.joelonsoftware.com/2000/03/19/two-stories/ (as best as I can tell, the App Architecture villains were attempting to sell him on Emacs- or Eclipse-style extensibility, and he failed to understand that)

        [2] https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...

Pocomon 10 hours ago

'SteveB went on the road to see the top weeklies, industry analysts and business press this week to give our systems strategy. The meetings included demos of Windows 3.1 (pen and multimedia included), Windows NT, OS/2 2.0 including a performance comparison to Windows and a “bad app” that corrupted other applications and crashed the system. It was a very valuable trip and needs to be repeated by other MS executives throughout the next month so we hit all the publications and analysts.'

http://iowa.gotthefacts.org/011107/PX_0860.pdf

'The demos of OS/2 were excellent. Crashing the system had the intended effect – to FUD OS/2 2.0. People paid attention to this demo and were often surprised to our favor. Steve positioned it as -- OS/2 is not "bad" but that from a performance and "robustness" standpoint, it is NOT better than Windows'.

http://iowa.gotthefacts.org/011107/PX_0797.pdf

ZhiqiangWang 14 hours ago

OS/2 powered NYC Subway MetroCard vending machines for decades

wkjagt a day ago

> OS/2, Microsoft’s latest addition to its operating system line

Wasn't it mostly an IBM product, with Microsoft being involved only in the beginning?

  • mananaysiempre a day ago

    The article is from December 1987, when nobody yet knew that it would end up that way. The Compaq Deskpro 386 had just been released in 1986 (thus unmooring the “IBM PC clones” from IBM), the September 1987 release of Windows/386 2.01 was only a couple of months ago (less if you account for print turnaround), and development of what would initially be called NT OS/2 would only start in 1988, with the first documents in the NT Design Workbook dated 1989. Even OS/2 1.1, the first GUI version, would only come out in October 1988 (on one hand, really late; on the other, how the hell did they release things so fast then?..).

  • Hilift 19 hours ago

    Microsoft was only interested in fulfilling the contracts, and some networking components such as NetBIOS and LAN Manager, then winding down. This was because Microsoft had already been in discussions with David Cutler, and had hired him in October 1988 to essentially port VMS to what became Windows NT. Windows NT 3.1 appeared in July 1993.

    https://archive.org/details/showstopperbreak00zach

  • p_l a day ago

    While the NT OS/2 effort started earlier, Windows 3.0 was apparently an originally unsanctioned, rogue effort started by one developer, initially masquerading as an update to the "Windows-as-embedded-runtime" that multiple graphical products were shipping with, not just Microsoft's.

    Even when marketing people etc. got enthused enough that the project got official support and release, it was not expected to be such a hit of a release early on, and the expectation was that the OS/2 effort would continue, if perhaps with a different kernel.

  • zabzonk a day ago

    Microsoft unwrote a lot of the code that IBM needlessly wrote.

    I worked as a trainer at a commercial training company that used the Glockenspiel C++ compiler that required OS/2. It made me sad. NT made me happy.

  • fredoralive a day ago

    This is from 1987, the IBM / Microsoft joint development agreement for OS/2 didn't fall apart until around 1990, and there was a lot of Microsoft work in early OS/2 (and conversely, non-multitasking MS-DOS 4.0 was largely IBM work).

  • chasil a day ago

    Windows NT originally shipped with an OS/2 compatibility layer, along with POSIX and Win32.

    I'm assuming that all of it was written mainly, if not solely, by Microsoft.

  • rbanffy a day ago

    If you count the beginning as the time from OS/2 1.0 up until MS released Windows 3, then it makes sense. IBM understood that Microsoft would continue to collaborate on OS/2 more or less forever.

    • SV_BubbleTime 18 hours ago

      As an outsider to all the history and lore… IBM is probably one of the most confusing companies I can think of.