
Blog

Aug 16
MiniDisc

I've been thinking for a little while about how I would present this on my blog. On "fast" social media (Cory5412 on Twitter) I just kind of jumped in and started sharing pictures and observations, but I still haven't written any single longer or more formal pieces. I do have some generic reference info up and a trip reports page, but no narratives yet. So, here goes!

Minidisc has been my Interest Object for the past year, starting roughly this time last year when I visited a friend who had a machine. I bought that machine from him, and some discs from another person, and I've spent most of the last year (and most of my disposable income) researching the format, helping document it, learning how to use it, and coming up with narratives and takeaways.

To take a step further back: Minidisc is an audio format Sony introduced in 1992. The format uses ATRAC compression to fit 60, 74, or 80 minutes of audio onto a disc, the idea being that you can dub a whole CD to a minidisc, or create custom mixes with individual tracks from several CDs, and use the minidiscs on the go. The discs themselves are protected by a hard shell with a shutter that closes completely.

The format did extremely well in Japan. Sony sold portables until 2012 (globally) and bookshelf stereos until 2013 (in Japan). TEAC was still selling an MD recorder until literally December 2021, and Sony still sells new blank minidiscs today, in August 2022.

Several advancements were made to the format over time, including but not limited to:

  • The ATRAC1 codec got better every year
  • MiniDisc Long Play (MDLP), introduced in late 2000, added a new codec called ATRAC3 with 132- and 66-kilobit modes for up to 160 or 320 minutes of stereo audio on an 80-minute disc (see the rough arithmetic after this list)
  • A further new format called Hi-MD was launched in 2004 along with the new ATRAC3plus audio codec, 1-gigabyte discs, better USB/computer connectivity, and a clever trick to use existing discs more efficiently
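
For a sense of where those MDLP run times come from, here's a hedged back-of-the-envelope sketch. It assumes standard SP stereo is ATRAC1 at 292 kilobits/second (the commonly cited figure) and simply divides the same data budget by the lower ATRAC3 rates. Note that the naive division overshoots the marketed 160/320 minutes; as I understand it, MDLP pads ATRAC3 frames to fit the disc's existing sound-group layout, landing on exact 2x and 4x multipliers.

```python
# Rough MDLP capacity arithmetic. Assumes SP stereo = ATRAC1 at 292 kbps
# (the commonly cited figure); treat everything here as an estimate.
SP_KBPS = 292
DISC_SP_MINUTES = 80  # an "80-minute" blank

total_kilobits = SP_KBPS * DISC_SP_MINUTES * 60  # audio payload in kilobits

for name, kbps in [("SP  (ATRAC1)", 292), ("LP2 (ATRAC3)", 132), ("LP4 (ATRAC3)", 66)]:
    minutes = total_kilobits / kbps / 60
    print(f"{name}: ~{minutes:.0f} min on an 80-minute disc")
# Prints ~80, ~177, and ~354 minutes; real hardware advertises 80/160/320
# because of the frame padding mentioned above.
```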

Sony struggled to sell MiniDisc in the US. The common lore is "the RIAA" (and subsequently: SCMS), but I believe it's significantly more complicated than that. The bottom line is that Sony was and is a large multi-national corporation with a lot of power, and it simply failed to make Americans aware of and interested in the format. People who did somehow find out about the format liked the idea but found it significantly more expensive than CDs and tapes, even in the MDLP era, a moment when prices on everything MiniDisc dropped dramatically.

It also bears mentioning that Sony is and was a large record label, the second largest, with over 26% market share as of 2021.

Sony attempted to relaunch the format in 1998, and had a decent strategy lined up, but still couldn't quite deliver. Retrospectively people blame – among other things – the availability (or lack thereof) of pressed discs, and the fact that pressed albums on minidisc cost more than on CD or tape. Sony also made explicit choices about the way it advertised the format.

It's popular to talk about MiniDisc as a failed format. The typical American narrative is that it came along about a decade too early, cost too much, and then died at the hands of the RIAA. Americans almost never acknowledge that it was wildly successful and is literally still in daily use in Japan today, or that it had a significantly bigger post-1998 and MDLP/NetMD-era boom in Europe. Most Americans don't even realize Sony was selling new minidisc hardware here until 2012.

Most of this really lies at Sony's feet. This may become a series for me, but Sony was making weird choices about the format constantly, from literally the first day. Yes, the RIAA sucks, but they didn't singlehandedly kill Minidisc.

Oct 04
Windows 11
I missed out on this earlier in the year; I went on a long vacation almost exactly when it was announced. I’ve been using it on one of my machines, although shortly after I set that up, I set up a new desktop and made the specific choice to run that machine on Windows 10.

I want to preface this post by saying I like Microsoft software. I went from being a child who said stuff like “Micro$haft” and “windbloze” to running Active Directory, SharePoint, and Exchange at home to provide services to all my home computers, most of which are Windows machines.  

I’ve enthusiastically been running Windows beta releases since the 7 beta back in 2009 and I usually make a point of migrating a few machines to the new OS as fast as possible upon its release, even if it’s not every machine I run. 

I’m really struggling with Windows 11, however. I think it’s half baked, the requirements are too restrictive, Microsoft has done the worst possible job communicating about it at every step of the way, and several other components of the new OS version are poorly considered. It also has the historic problem of failing to mesh old and new components. 

Windows 11 itself is essentially Microsoft taking some of the security and driver things from the canceled Windows 10X and putting them into a new main-line version of Windows that will replace Windows 10 soon. That’s even though just five years ago Microsoft told us Windows 10 was The Forever Windows. As has been standard for the last fifteen years, Windows 11 makes some superficial changes to the OS and some substantive changes under the hood. 

The Requirements Thing 

The system requirements for Windows 11 are steep compared to all previous versions of Windows. Microsoft throws several years' worth of Plateau machines out. The minimum requirements include an 8th generation Intel Core CPU or a 2nd generation AMD Ryzen CPU. For anybody keeping score at home, as of this writing those CPUs are under three years old. Even Microsoft is still shipping a machine with a 7th-gen chip.  

Apple is somewhat famous for doing exactly this – killing support for an old machine before the end of the useful life of the hardware. They’re doing almost the same thing this year. We all roll our eyes at Apple, but for the most part Apple’s core market is "people who can afford this"; Microsoft, for better or worse, has been awarded "the rest of humanity" as its market.

The “to be fair” point is that Microsoft has committed to security patches for Windows 10 until October 2025, which will give Windows 10 around a 10-year total lifecycle. To extend the Apple comparison – in October 2025, 7th-gen computers running Windows 10 will be approximately as old as most Macs are when Apple discontinues their OS support.  

Worse than the updated requirements has been the way Microsoft communicated everything. 

  • First, they published the new requirements. 
  • Then, they published pages suggesting that Windows 11 had “hard floor” and “soft floor” requirements for different support commitments. 
  • Then they reneged on the hard/soft floor thing and doubled down on the requirements.
  • Then they said that the requirements would apply in certain circumstances (like virtual machines) anyway. 
  • Then they lifted the requirements to use the Insider Preview software. 
  • Then they put the requirements back in. 
  • Then they said you’d be able to avoid the requirements, but only if you install by booting from a disc or a USB drive. 
  • Then they said they might deny unsupported devices security patches. 
  • Then they said they’d add a disclaimer to allow you to install. 

Microsoft: What the fuck? 

To be clear here, Windows 11 does not need any of the new hardware. It runs fine on existing hardware. Microsoft has cited "additional security" as a reason, which, fair, maybe make a Trusted Windows edition? They cited "additional reliability" as a reason, but it’s a vanishingly small increase in reliability: the minuscule improvement is that on a new system you will experience two instead of three crashes out of a thousand opportunities for a crash to happen.

As of this writing, it’s unclear whether Microsoft will relent from these changes. They should, but the whole thing has a bad taste. I get why Microsoft wants to do this, and, maybe they should clarify their support policy, but many of the people and organizations running Windows on old machines have local expertise and don’t really need Microsoft’s support or guarantees – just for the software to work. 

Microsoft will either renege and Windows 11 will run on all the same hardware Windows 10 runs on or the next several years will be a terrible expensive burning hellscape of people trashing perfectly good machines that meet their needs in order to buy new computers during a chip and logistics shortage to achieve slightly fewer crashes. Except, people are going to put all the same software from the ‘90s on it and use terrible, cheap components wherever they can, putting those crashes right back in and completely negating security improvements. 

I suppose the third possibility is that lots of people will make good on decade-old promises to finally switch to Linux. That or Microsoft will have to deal with lots of unpatched Windows 10 systems having a meaningful impact on overall Internet security, like those times they extended Windows XP patches. 

The Interface 

Windows 11’s headlining features really are the interface changes. At absolute best, I’m ambivalent about most of them.

At absolute worst, I hate the new changes to the taskbar and start menu and am confident they’ll make my professional and personal lives worse. I’m sure it looks great if you’re a High Concept Thinking Man Microsoft executive whose only real programs are Teams and Outlook and you’ve wanted your Windows computer to look more like an iPad for an entire decade. The rest of us have these things because we have real work we need to get done.  

Apple has this same problem: Leadership at tech companies who make OSes do not really use their computers heavily. Mac OS is no longer functional on 13-inch laptop displays. Windows is not far behind. 

I don’t run multiple displays so Microsoft’s improvements there aren’t likely to help me that much. 

One thing I have started doing in the last few years on my Macs is enabling the "reduce transparency" and "increase contrast" options in the accessibility area. It’s a moderately subtle effect that makes it clear where all the clickable areas are and doesn’t meaningfully reduce the functionality of anything else on the system. Microsoft is still behind in this aspect: the reduce transparency effect exists, but the only additional helper after that is the ability to swap over to a theme that destroys how everything appears. I have the same criticism of Windows 10 and 8/8.1.

The Pack-Ins 

I was originally going to complain in this section about Teams. Microsoft is going to spend a couple years trying to "Make Teams Happen" and then, when it finally doesn’t, they’ll remove it, pissing off the three people who were using it. Teams is a giant Electron app that tries to do almost everything for everyone. It’s an abomination, and it uses between half a gig and a full gig of RAM whenever it’s running, and that’s on the low side.

Teams in Windows 11 doesn’t start running until you invoke it, but if you ever do by mistake (and Microsoft takes lots of pains to make sure this will happen), it will probably be running in perpetuity. I will start using it if any number of my friends do, but to date nobody has. In comparison, Telegram on my system is using 130 megs of RAM and Signal is using 95.

The other pack-ins are fine. Microsoft updated a few icons, implying some updated programs, but Paint, Notepad, and WordPad are the same as they’ve been since Windows 7 or earlier. Calculator is still bad. The Clock app now includes a "focus assist" tool.

The tools are Good Enough, I guess. Most of why I use a computer doesn’t center on the built-in tools, so the main thing is that they haven’t meaningfully regressed. Several of the ones I’ve mentioned, I replace anyway: I’m a big fan of Visual Studio Code or Notepad++ for plain text, Paint.NET for screenshots, and Word or OneNote for styled text.

The Phone Games 

Microsoft has announced they are going to include Amazon’s Android app store with Windows 11, but that’s going to come out later. I would like it better if it were Google Play, but I suspect this was the easier deal to broker, and that Amazon’s app store may already have software set up for whatever’s needed to run this software on x86 – whether that’s an ARM emulator or the code being in an intermediate language that can be interpreted on different CPUs.

I like and want this feature in a future Apple Silicon Mac. I’m confident it’ll perform well in that context, since there will be no emulation and the Apple Silicon Macs are already much more powerful than most other existing iOS devices. However, even on the Mac I’m not sure I want to use a bunch of phone apps when dedicated Mac apps or even the web sites might be better. 

This feels like the final admission in the long, protracted process of admitting UWP has been a failed technology. 

The Missing Stuff 

There is still no damn universal screen recorder. Why? Apple added it to the Mac in 2009. Literally everyone who uses computers has a reason to record their display. The technology is literally already built into the OS and Microsoft will not allow you to target it at “whole monitor”. 

Otherwise, Windows 10 and 11 are mostly “batteries included” in the sense that there’s not an awful lot I would add that should be part of the base OS.  

That said: the lowest-hanging fruit for improvement is probably adding a native local interface to the password manager Microsoft has been half-heartedly adding to Edge and Microsoft/Office 365, including a generator and the ability to make/view passwords for arbitrary other things.

The Local Accounts 

I almost didn’t write about this, because to be honest, I forgot. Microsoft is requiring a Microsoft Account to sign into Windows 11 Home. You will not be able to make a local account at boot. Pro, Enterprise, and Pro for Workstations do not have this limitation. 

I don’t have much to say about this. I don’t mind. From a pure technology standpoint I think that it makes more sense for most people. 

I think we’ll find the people opposed to this will just go to Windows 11 Pro (or a Mac or Linux) and consumers are likely already using Microsoft Accounts to sign in so it’s not a change for that group. 

This is yet another symptom of a bigger shift in computing away from local resources to centralized ones. It’s happening for a variety of reasons. In theory, it’s a good movement. Centralizing identity management into trustworthy organizations is a fine idea and it can make things like two-factor authentication, passwordless authentication, and having multiple devices easier.

This is a Bigger Issue than just Windows 11, but the thing I fear about it, and the reason I understand why people don’t love this change, is that it’s not clear whether Microsoft takes its role as identity provider for 90% of extant computers seriously. In a world where Microsoft can terminate an account at will and where it’s difficult or impossible to contact a support organization, I don’t know if tying everything to that kind of account is wise.

With Microsoft, there’s relatively little evidence I’m aware of that this is a particularly big problem, but Microsoft is completely opaque on these issues, and we do know it’s a Very Big Problem with Google. 

The Solution 

This is spicy but here’s my Opinion™: Microsoft should cancel the release of Windows 11. That would look like this: 

  • Withdraw the beta builds and recommend people move back to Windows 10. 
  • Withdraw or extend the publicized End of Support for Windows 10. 
  • Put Windows 11 back in the oven. 

Microsoft clearly needs some time to review the holistic picture of Windows: as an OS, as a piece of software, as the individual pieces of software that comprise the whole, and as the ecosystem at large. They need to review the hardware requirements and communicate better about why they’re making the changes they are.

Windows 11 could be a legitimate improvement over 10, but right now it has the potential to be yet another "bad release in between two good ones". That’s a bummer because I usually like the bad releases. Millennium Edition, Vista, and 8/8.1 all had legitimate technical and directional improvements and were done dirty by the tech press, tech enthusiasts who hate change, and in some cases ecosystem problems.

I have some wish list items for Microsoft that aren’t related directly to Windows itself, but could benefit most Windows users. 

More generally, I think Microsoft really needs to address the concerns of people who might use the Home SKU about what guarantees there are that Microsoft won’t disable their accounts. There’s a greater discussion to be had that Microsoft isn’t engaging in so far.

My Plans 

I am going to eventually switch to Windows 11 on at least some of my machines. Some of my frustration comes from the fact that I have four or five personal computers running Windows 10, none of which are officially supported. I am going to leave my Surface Laptop running 11 as long as Microsoft allows it. My other old machines will stick around on Windows 10 until they get replaced, whether it’s by Windows 11 machines or by other kinds of computers. 

I’d already been thinking about getting a Mac, for example, and I realistically can replace what the Surface Go and Laptop do with an iPad, perhaps a slightly nicer iPad than the one I use for phone games. My secondary box I use while I’m working can be a virtual machine on a server or a remote desktop connection to a single desktop I use as my main Windows computer as well. 

I have a lot of hardware in the “still performs well but won’t officially run Windows 11” band. Once it becomes clearer what Microsoft’s direction here will be, I may put Windows 11 on some of it, but I’m also going to take some time looking at how well Linux runs on some sideboard machines. Everything I need for most of my computers will be fine on Linux, so I may start poking at that. 

We’ll see. 


May 17
Computer Power Bands


Post Meta: I haven’t posted in a million years. Oops. This is a super rough off the cuff set of thoughts that felt better all in one place rather than in an 87-post Twitter thread. My apologies for any rough or awkward wording. I spat this out and decided to copy and paste it onto my blog. You have been warned and, if you decide to read this: My apologies. 


A very interesting thought recently got posted to the forum. 

Paraphrasing here, in a thread predicting new iMacs, someone asked if anyone had pre-ordered one of the new M1-based iMacs, and a few people responded, and somebody posted something to the effect of: Why do people care so much that this is a mobile chip? It’s got amazing performance. 

This poster is right, and they mentioned they have an M1 Air and have ordered or are going to order an M1 iMac.


They’re so right, in fact, that they went on to give three examples of other times Apple built a platform and then used it in both desktops and laptops: the Macintosh 630 (which later became the PowerBook 190 and was also the basis for the Macintosh 6200 and the PowerBooks 1400, 2300, and 5300), the Mac mini G4 (and every Intel-based mini up to the 2014 model, which are all based on the iBook G4 and then the MacBook Pro, respectively), and the iMac G3 (which shared a platform and performance numbers with both the PowerBook G3 and PowerMac G3, marking one moment when a single chip was powering all of Apple’s computers).


In addition to that, there was the time in 2014 Apple introduced the "ULV" 21.5-inch iMac, which is, again, a MacBook Air in a desktop (and still on sale today), and all the times iMacs employed primarily mobile technologies like MXM slots, 2.5-inch disks, SODIMM memory, and so on and so forth.


I think the disconnect here may be that the iMac is... aimed at the low end of the Mac desktop market. Today’s 24-inch iMac has its forebears in machines like the original iMac and anything that bore the “Performa” name. 


This isn’t bad per se – I’ve written in this space before about how almost all "low end Macs" were badly misunderstood and done a huge disservice, ironically by a site claiming in the late '90s to be all about the reuse and life extension of Low End Macs. (I’m going to write more about this, specifically, in the future.) Most low end computers exist to serve a specific market or purpose and I strongly believe all the M1 Macs do it well.


The problem, then, is that the M1 processor itself punches WAY above its weight. It’s well more powerful than almost anyone predicted at the start of 2020, when the ARM Mac rumors started in earnest. The M1 Macs easily outperform the remaining 21.5-inch iMac, the Mac mini, most of the 16-inch MacBook Pros, most configurations of the 2020 iMac, and to be honest they’re almost certainly within spitting distance of the basic Mac Pro configuration.


So why wouldn’t every single Mac buyer get one? 

The simple answer is that the M1’s low end limitations prevent it from doing some things those other Macs can, even though they are slower than it. Sometimes, you just, non-negotiably need 30 gigs of RAM, three monitors, or six PCI Express slots and the M1 Macs we have so far don’t do any of that. 


The M1, as a chip, will almost certainly be around in low end Macs for several years, even if Apple refactors what those systems look like, exactly as they do with iPad and iPhone processors. I have no trouble believing that someone within the power band the M1 systems are supposed to serve will be happy with one for a decade (especially if they go for sixteen gigs of RAM). 


The nerds who talk about the iMac’s inflexible design already have more powerful computers anyway, and if they tried to use an M1 system, it would probably work, but it would be a poor experience and they’d thrash the thing into an early death. Because sometimes you just need 30 gigs of RAM, and if you don’t have it, your OS thinks your SSD is a good enough substitute.


The Plateau connection here is that most of the systems Apple has replaced so far haven’t gotten much faster over the past few years. The 2018/2019 MacBook Airs that the M1 model replaced moved from a 15-watt CPU family from 2013 to a 7-watt CPU design and most reviews of it said that it wasn’t a big performance upgrade as much as a quality-of-life upgrade. The reason you replace your 2013 MacBook Air today isn’t that it doesn’t run Big Sur or that it’s too slow to run Office but because you’ve outgrown its RAM ceiling or because its battery or hinge has worn out. 


On the desktop side of things, and on the Windows side of things, you can run systems even longer if you want. Old iMacs and Mac Pros have internal flexibility that makes unofficially running newer OS releases possible, and Windows itself will run on almost any computer newer than 2005.


The main reasons to upgrade to a newer computer are that you’ve hit the ceiling on an older one, that upgrading it is more expensive than getting a new one, or that you’re at a point where you can get something only a few years newer and achieve an energy savings big enough to pay for the newer machine. (Not directly, of course, because savings doesn’t make up for cash in hand, but when you’re a hobbyist running a sideboard or task-box, swapping a Core2Quad for a low-watt third or fourth generation system gets you both a huge boost in performance and a giant energy savings. It helps that third and fourth generation Core systems are almost free because they’re off lease from corporate and institutional environments.)
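
To put rough numbers on the energy argument, here’s a hedged back-of-the-envelope sketch; every wattage and the electricity rate below are illustrative assumptions, not measurements of any particular machine.

```python
# Back-of-the-envelope: does swapping an old always-on task box for a
# newer off-lease one pay for itself in electricity? All figures are
# assumptions for illustration, not measurements.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.13   # assumed utility rate

old_watts = 130  # e.g. a Core2Quad tower under light load (assumption)
new_watts = 40   # e.g. an off-lease 4th-gen Core SFF box (assumption)

saving_kwh = (old_watts - new_watts) * HOURS_PER_YEAR / 1000
saving_usd = saving_kwh * RATE_USD_PER_KWH
print(f"~{saving_kwh:.0f} kWh/yr saved, ~${saving_usd:.0f}/yr")
# ~788 kWh and ~$102 a year under these assumptions, which is in the
# same ballpark as what an off-lease machine costs.
```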


So: if your old computer wore out, you want a Mac, you need some of the sundries (the display/webcam/audio on the iMac, for example), or you have a workload that takes advantage of CPU horsepower but almost nothing else, any M1 system will be great.


I’m on the fence myself because I’m using a laptop with 8 gigs of RAM in lieu of my normal desktop and am regularly running up against that 8-gig limit in ways I wouldn’t have on my desktop with 16 gigs of RAM. It’s giving me pause about the long term viability of a desktop locked into sixteen gigs of RAM, especially because, let’s be real: Apple handles RAM worse than Microsoft does, and on the M1 more parts of the system are using the same pool (vs. my desktop, where the graphics card has its own memory). There is no way around this fact.


I’m, personally, also real stand-offish on running Office on MacOS. Outlook specifically is super important to my home and personal computing life, and while it does exist on the Mac, it doesn’t do half of what I need and am getting out of Outlook on Windows. Word/Excel/OneNote and PowerPoint on the web have advanced to the point where, to be honest, my day-to-day now involves Outlook and OneNote on the desktop (and OneNote on the Mac is fine; it matches what the Windows 10 UWP OneNote and the web app can do, and today that’s Good Enough for me the overwhelming majority of the time) and Word and Excel on the web.


Long-term, if I were to switch most of my computing to a Mac, this trend would need to continue, which regrettably would probably lead to me moving data off of my on-prem SharePoint server to my Office 365 account, which is a set of thoughts for another day. 


Oct 26
Computer As A Service

Global Pandemic continues apace, and I'm probably going to stop writing for the next few years in favor of NaNoWriMo.

I've gone back and forth and written on this a few times over the last several years, so it probably won't surprise anyone too much that I'm doing it again. I've been thinking about the modern phenomenon of large corporations working their way into home computing: not explicitly exposing options to do backups locally, or making local backup options more difficult than necessary. This is especially true on mobile, which suffers from a history of not being a place where people have data (now thoroughly untrue); see the iPhone requirement that backups be stored on your home computer's boot disk, which might now be smaller than your iPhone is.

There have been a couple of instances lately of people getting locked out of their Google accounts for relatively innocuous activity. A friend of mine moved internationally and has now lost access to his Gmail account. There's also a recent instance where a Google employee's husband lost access to his Google account. Facebook is doing weird stuff with Oculus and real-name accounts.

This whole thing comes from looking at the state of modern computers. Apple, Google, and Microsoft all expect you to have accounts in their ecosystems and use those accounts to authenticate to your home computer. If my friend had been a Chromebook user, not only would an account be missing but he wouldn't be able to sign into the machine at all. All three of these are essentially selling "computing" as an experience instead of "computers" and "software" as individual products.

I've sung the praises of this approach before, but the qualification here seems to be that the switch to a services-based computing ecosystem should come with an escape hatch for experts, enthusiasts, developers, and businesses, and with the equivalent support infrastructure. There also needs to be a safe harbor or evacuation provision should the provider determine that your data doesn't meet their terms of service. Better still would be legislation requiring both a support mechanism and some kind of common carrier (or similar) status for these services, so they don't up and randomly delete all your stuff.

To be honest, I've predicted for a couple years now that we would get to a point where the actual computing power would end up being in the cloud, and at home we'd just get thin clients. Of course, on the other hand, you can now save a few kilobits/second of throughput on a video conference if you have an nVidia RTX video card and have the time to make and transmit a detailed 3d model of your body, so there's a question of whether or not that will ever come.

So, on one hand: perhaps we should just stop. We should give email and identity control services back to smaller entities that have the ability to hire real humans to do support, and allow smaller companies to set up services for scenarios where it's beneficial. Really, a managed service provider but for home computers would probably be beneficial, and at least in the case of Apple + iCloud and Microsoft + 365, I think that's how I've been trying to see cloud services.

The way cell phones are financed, which has spilled over into computers, is an interesting model here. Things like iPhone Upgrade Plan have led to Apple Card Monthly Installments and Microsoft Surface All Access. With plans like these, customers can roll the cost of a computer, software and cloud data storage for it, and warranty support, into a single monthly cost. Of course, these plans don't appear to offer more support than standard. As a computer OEM, Microsoft does offer more support for Surfaces than they do for Windows on other devices.

With the Apple plan, you own the device at the end of your installments and can choose to keep it or go start a new plan. I'm presuming the Microsoft plan is similar. Microsoft lets you choose an appropriate accessory and a copy of Office 365 as part of the plan, which reduces complication; it doesn't look at the moment like Apple's Monthly Installments offer any ability to pre-pay for iCloud storage, though.

The financing side of it is where things get hairy. On one hand, it feels compelling to say to somebody who "needs a computer" and might need help with it that they should consider one of these plans. It's also potentially interesting to people who upgrade computers frequently, either because they wear out (whether by way of battery or something else) or because their needs are particularly high end. On the other hand, because these are all tied to banks, it's tough to say this is going to be available to the people who might need it most. So, it mostly remains an option for middle class people or people who prioritize setting up some plan like this.

Which brings me back to the original point. Chromebooks and cheap Windows computers do everything in their power to tout the benefits of using a Google or Microsoft account to sign into the machine and run your life from. If something happens to that account, it can cause no end of problems, and there may be no way to get in contact with support; as a non-paying user, there may be no support from a person available at all.

I've suggested this before but there's an alternative universe here where the local branch of the combined post office and telco provides home oriented managed computing services.

The gotcha here is that this is probably impossible to do with the technologies people actually want to be using (Windows/Mac) from a licensing perspective, relative to using these things in an in-organization setting or for business, and this business probably wouldn't be profitable. That this is a concern is of course indicative of a larger problem in modern culture. The established tech providers aren't interested in allowing something unless they can provide it, and by and large, nobody is thinking about providing such services to the home unless there's a meaningful profit involved. Support, by its very nature, is not profitable.

Oct 19
iPhones Twelve

Softball tech thoughts for my first note in a few months.

Apple announced new iPhones last week and they are mostly notable but also not notable, or, perhaps, mostly not interesting to me. They are faster, have slightly different shapes, and there's a new magnetic accessory system called MagSafe, which cases, wallets, and power adapters can connect to. Most interestingly to me, Apple has, following the same move on the Apple Watch, elected to stop including the power adapter and the headphones.

As often happens, something people were excited for a year ago happened very slightly differently than people imagined. In this case, Apple dumped the power adapter, the headphones, and swapped out the old USB Type A cable for a Type C cable.

The consternation here is that not many people have Type C chargers already, and as such, Apple stands to profit from the sales of other USB chargers. Although Apple does sell those chargers, I don't really think this is as big a deal as people make it out to be. Most iPhone buyers have Lightning cables already and most existing Android buyers already have Type C chargers; to be perfectly honest, third party Type C chargers have now fallen in price, with one from Anker weighing in at around $14. Apple's own 20-watt Type C charger has also been reduced to $20. IKEA sells a compatible charger for $12 and an A-to-Lightning cable for $9.

It's tough because, on one hand, these are potentially added expenses, and that's just with power delivery. The headphones are another issue, but ultimately this is the kind of switching pain that would have happened anyway. The cheapest way forward for someone who has a Type A charger but somehow doesn't have any extant Lightning cables is to buy a Type A to Lightning cable, which can be done for under $10.

At first, I thought this applied only to new iPhones, but Apple has updated every iPhone page with the same information, so this now applies even to the least expensive new iPhones.

Same deal on headphones. The cost on those has been reduced and most people already have them.

The next most interesting aspect for me is MagSafe, which is almost enough for me to consider upgrading my phone earlier than I might normally do, although I haven't really been leaving the house a lot so I don't have the use case I would have claimed last year.

At the end of the day, I think this'll be fine. I've heard about how it might have made more sense for Apple to switch iPhones to Type C, and I don't disagree, but ultimately this is less severe than when the 30-pin connector was replaced, where docks and existing cables had to be replaced when the Lightning ecosystem came in.

Audio is still a thing, and I still consider the loss of the headphone jack to be more severe than this. The main thing is ultimately that wireless headphones cost more than new charger cords or even new chargers. On the other hand, the original reason I was worried about audio on the iPhone X was because I'd been using a battery and headphones at the same time on my walks playing Pokemon Go, which I haven't had to do for a while.


Jun 29
It Happened

I was wrong about my previous predictions that Apple would not be switching to ARM processors. Approximately 30 minutes of the WWDC 2020 keynote last week were dedicated, in a nearly "one more thing…" style reveal, to announcing that Apple would be switching to what it's tentatively referring to as Apple Silicon.

To get this out of the way: Apple is not merging the Mac and iPad as platforms. That this thought persists is very strange and, to be honest, indicates to me that people straight-up don't believe what Apple has been telling them for literally a decade. There's a thread about this on Ye Olde Computer Forum and someone mentions this possibility literally every page. In fact, Craig Federighi, SVP of software engineering at Apple, has doubled down on user control not being removed from the machine.

Lots of details emerged over the last week, and I think we'll still be finding things out over the course of the next few months until Apple releases the initial Apple Silicon Macs "by the end of this year."

As a start, here's what basically matched the playbook from the 2005 PowerPC to Intel transition:

  • There are still new Intel-based Macs in the product pipeline
  • Universal Binary 2 – Applications compiled to run on both kinds of computers
  • Rosetta 2 – Intel binaries will be translated to run on Apple Silicon at install time
  • Apple showed software from a few vendors (Adobe and Microsoft, and its own Final Cut Pro) having been ported to the new platform
  • Apple showed some software (Maya and a Tomb Raider game) running under Rosetta
  • Quick Start Program and Developer Transition Kit
  • Intel-based Macs will receive new OS versions and support "for years to come"

Anybody who has followed Apple for a long time will have seen this coming a mile away. I've seen people talking about this possibility for literally a decade now, back when it was preposterous to imagine based on the kinds of ARM silicon that were available. Microsoft beat Apple to the punch, twice, but Apple is doing a better job here. Microsoft Office isn't available for Windows on ARM64 and Apple's entire Mac software library is, and Apple already has Adobe and (ironically) Microsoft on board.

In 2005, a problem Apple had was that its two sets of products, based on PowerPC G4 and G5 CPUs, couldn't advance very fast because, to put it frankly, IBM and Motorola didn't really have roadmaps that met the needs of the Mac as a platform. IBM was largely willing to (in comparison) move mountains for products like the Xbox 360 and PlayStation 3, which had very high sales volume expectations compared to Macs, but couldn't give Apple the time of day. It doesn't help that Apple probably also wasn't the best partner, kind of playing Motorola and IBM against one another, but that's not what this is about.

Intel had products ready to go for every single product in Apple's stack, and they were screamers compared to what Apple had before. Even a Pentium M or Pentium 4 was meaningfully faster, on a better functioning and better engineered platform, than the existing Macs, but the Core architecture, on which Apple released its first generation of Intel-based Macs, was widely seen as the revival of Intel after an era in which it briefly fell behind and essentially sat next to AMD in performance progress. If you bought a Mac in 2006 it was typically, at minimum, around four times as fast as the machine it directly succeeded.

Today, I don't think the story is quite that way. iPad Pro processors are said to outbench MacBook Pro processors, which is impressive, but we're not talking about a 200% difference in score; we're talking about around 110-125%. (I think I have those numbers right; basically, iPad CPUs are a little bit faster than current Mac laptop CPUs.) The other unanswered question, of course, is that those numbers come from Geekbench, which isn't, really, a very long-running test.

I don't think we have to worry though. The Long Plateau basically means that an ARM-based Mac's primary competition is going to be laptops from 2013-2015 and that most people on Macs today still think systems from even earlier than that perform fine. So, if an application performs about as well as it did on a system that's just shy of a decade old, it'll be considered a success.

That's a huge shift from where things were in 2005. In 2005, a Mac from 1995 was literally unusable as a modern computer unless you crammed a thousand dollars of upgrades into it, and even then, arguably, it worked but wasn't a good experience. And if you'd wanted to keep that machine current over time, that "thousand dollars of upgrades" (in 2005) is much closer to $5,000 in upgrades, if not more.

The Developer Transition Kit is super interesting. In 2005, it was a box the shape of a Power Macintosh G5 with a very lightly modified Intel stock motherboard, a 3.6GHz Pentium 4, 1 gigabyte of RAM and a "fine" hard disk. (I think it was 160 gigabytes.) Initial reports, by the way, were that it felt way faster than PowerPC Macs. The kit cost $1000 to lease, which is probably around what it cost to build given that that was around the middle of Intel's consumer line at the time. The modern kit is a $500 box taking the form of a Mac mini with a re-factored iPad Pro platform inside. An A12Z CPU, 16 gigs of RAM, 512GB of storage, and "a variety of Mac ports" (probably means the same as the regular Mac mini, but without ThunderBolt 3).

Based on the keynote, everything runs well on the DTK. Apple could button it up and wrap it neatly in a box and sell it to consumers today and they'd buy it and almost certainly like it a lot, especially if it cost any less than the existing baseline Mac mini. The generous 16/512 (vs. 8/256) configuration helps a lot. They won't, and that's probably the most impressive thing. Whatever comes out is very likely to feel like a revolution.

There are some unanswered questions. During the Intel move, Apple took several months to deliver dual booting as a feature with Boot Camp. They've said outright that booting alternative OSes isn't yet available, but they haven't given any details on whether that's a drivers issue or a bootloader issue or a locked firmware issue. Long-term, I suspect things will open back up as they did on Intel, but it'll take a little while to get there.

One observation, looking at the last fifteen years of Intel Macs, is that the way people do computing here in 2020 is unimaginably different than it was in 2005. In 2005, nominally speaking, delivering Boot Camp or enabling something like VMware Fusion was critically important because Mac users still very frequently had need to run Windows software. In 2020 though, is that really the case? Is the target demographic for running Windows software capable of anything different? Microsoft Money and Quicken have largely been replaced with services like Mint and YNAB. "Mobile" is the most profitable sector of gaming, and actual, physical PCs cost less than they ever have. In 2002, Virtual PC 5 with Windows 2000 cost $230. Today, you can get an entire computer licensed with Windows 10 for that much. In 2005 on the eve of the Intel move, an RDP client for Mac didn't even exist. Today, it's practical to do an entire day of relatively involved computing using it. In 2005, cloud infrastructure tools like AWS and Azure barely existed at all. Today it wouldn't be impractical for some company to roll out "Virtual PC" as the name of a Windows 10 VPS you can lease to use remotely.

The biggest problems caused by moving the Mac away from Intel's CPUs are, my point is, solvable.

I'm excited. This is an exciting thing, and, despite being a Windows user who went so hard into using Windows that I've got Active Directory, Exchange, and SharePoint on my own server, I'm interested in what Apple is doing and in having a modern Mac on hand for use.

I'm probably going to buy a new Mac with the Apple silicon in it, both because I live for trying out the new hotness and because I've wanted a new Mac anyway. I don't know enough to say I'm going to try to go back to the Mac as my daily driver, but, I have been using it a lot in the past couple months and am interested in giving it another go.

I'm going to wrap this up here, but I'm intentionally leaving an opening for a convenient second part of this where I address why I still believe most of what I wrote previously about people's reasons for the switch not making sense.

Jun 15
Hypothetical ARM Macs (or: WWDC Predictions)

So, in the past couple days, noise about the potential upcoming transition to ARM has been rising.

I've been trying to take some notes about it on my wiki page – that'll update over time, but you should be able to see the version history on it to see the page as it goes. As a warning – this page isn't organized.

I'm still skeptical, but I'm also aware enough to realize that I sound kind of like that guy who posted in great detail about how Apple would definitely never switch to Intel CPUs – on the eve of the WWDC 2005 keynote where Apple announced the transition.

I'm not against the idea, but it does grate to see people taking it as god-given gospel that it will definitely happen. That has backfired before. Apple had an audio interface ready to go for GarageBand in 2004 or so and it got leaked hard enough that they canceled it outright. I don't think Apple will spite-cancel a platform transition like this, but I am also going to claim that it's not off the table.

To be clear here, I continue to believe that Apple definitely has ARM-based Macs running in a lab. They've had that since at least 2015, if not 2010. I don't think there's any way around that. To learn that they haven't would be extremely surprising to me. It's just, Apple does almost everything in secret, so rather than showing off a 1u server with an ARM board in it running ARM builds of Windows and Office, with the old 7-era Aero styling on it, as Microsoft did in 2011, it's almost certainly under very tight covers. I strongly suspect we won't ever see the full extent of whatever it is Apple has been working on in regards to porting Mac OS X and their own apps.

The payout could be interesting though. If everybody's high high hopes were real, what would that mean? I've seen all manner of speculation, from "Intel will stick around for a very long time" (like, longer than 68k did after the PPC transition) to "the whole product line will flip at once" (implying that while working on iOS and the new Mac Pro and the refreshed laptop keyboards, Apple also had the time and person-hours available to engineer an entire stack of ARM-based desktop/laptop computers, which would be a herculean feat for a company that failed to update OpenGL for almost twenty years before finally pulling it out of their products, and had to delay the shipment of an OS X version because the iOS version of the day was taking too long) to "the process will be long, but not too long."

How you do this kind of transition is a tough balance. ARM CPUs might substantively change the platform. The Intel CPUs arguably did – all the software still ran, but everything that was native was 2x faster at the very least, from one model to the next. It was often more than a 2x lift, especially in the middle of the range, where a lot of models went from single to dual cores. In 2006, Intel had CPUs ready for every member of the Mac product stack and the transition was finished in about half a year. On the other hand, the PowerPC transition was much more gradual, with the first models being introduced at the top of the stack in 1994 and 68030- and 68040-based Macs persisting in the line-up until 1996.

The other part of the Intel changeover, and what makes it feel more substantive than the PowerPC one, is that for the first time the potential to boot a Mac directly into Windows or use virtualization tools to run it at full speed was present. In 2007, when Apple announced Boot Camp, the tool to do this, PC Magazine named the MacBook Pro the best extant PC laptop of the year, which is quite a statement.

Losing that functionality feels substantial, but for most people, it probably isn't. VMware or Parallels will produce an ARM desktop virtualization tool (if Apple doesn't publish their own) and developers will get cozy with the idea of remote access for at least a little bit.

Anyway – the thing that'll be interesting to see is what products come out of this, when they do, and what they're like. I find it tough to believe that Apple has chips ready to, say, replace the 16-inch MacBook Pro, but, who knows!

On, though, to my wishlist:

  • Tiny Mac laptop
  • Cheap Mac laptop
  • Cheap Mac mini
  • Cheap(er) iMac with an SSD
  • Mac OS X-based NAS or server appliance

Starting from the top, the most obvious first product is a 10-12-inch laptop. When Apple builds these, they gain incredibly dedicated followers: people who are willing to use them despite often obvious and somewhat massive shortcomings, often going as far as using them well beyond their best-by date or simply ignoring glaring flaws in order to have that level of portability. This dovetails with something I see people saying all the time: Apple focuses on thinness almost exclusively, to the point where its mainstream notebooks haven't shrunk in footprint since growing from 12/14 inches to 13 inches in 2006.

I'm in this camp, I'll be honest. If a revised small Mac laptop launched, I'd take a good long look at whether the Mac is a viable platform and where my Surface Go is, in terms of condition and functionality, and probably make the jump. I almost did a couple times when the 2015-16 12-inch MacBooks existed.

The next one is a meaningfully cost-reduced Mac laptop. Basically, any laptop that you can buy for like $599-799 or so. This could be sold in a size pair with the smaller, perhaps more premium machine, or it could be a separate product that maybe even gets a non-retina display in trade for being so cheap. Something like this would be incredibly popular in education and would probably do its fair share to knock prices off of the stock of existing used Intel-based Macs.

The Cheap Mac mini seems self-explanatory to me. If we're going by what iPhones and iPads cost, a Mac mini with 8GB of RAM, some kind of "it exists!" processor meant for Safari and iWork, and 256 gigs of storage, but with USB ports instead of a display, shouldn't be difficult to sell. $399 target price ideally for a basic 8/256 machine. Buy-ups available until you get to the price of the Intel-based Mac mini, and then there's decisions to make about which one is more appropriate for a particular situation, unless the mini flips over to being ARM-based in a single go. Bonus points, here, if the RAM is swappable, but I can also see Apple going for a Mac micro kind of scenario, reducing the mini's footprint even more, bundling it with a phone charger, eliminating the fan and advocating for it as, essentially, disposable.

The Cheaper iMac is, again, obvious. It's criminal that Apple's still shipping iMacs with spinning disks, let alone the 7th generation dual-core CPUs from the MacBook Air. 20-23-inch iMac with some kind of processor and an 8/256 configuration. Bonus points if the RAM is swappable. I vaguely suspect we won't see the 21.5-inch 1080p display return, but I think there's room here for a "cheap version with a ho-hum display". (Incidentally, the credible rumors at this point are that the iMac is going to see some kind of redesign, for the first time since 2009 (or 2013 when the machine became thin-at-the-edges) so I'd honestly be fine if 2020's cheap iMac is still Intel-based, so long as they advance it to SSD storage, which should allow them to get rid of the fairly sizeable bulge at the back of it, too.)

The server thing feels obvious to me too; this is pie-in-the-sky wishful thinking and I know Apple won't do it. But the play here is something similar to the Mac mini or the micro Mac from above, with a quartet (or more, even!) of SATA connectors to do bulk storage for other Macs and iPhones/iPads to use, perhaps combined with some kind of minor revival of some of the functionality of Mac OS X Server, like choosing what volume home directories go on, and maybe even boosting up APFS with some new features similar to ZFS or ReFS. Apple could also add two network interfaces and give it network router functionality, but that's getting to the point of modifying the base platform pretty significantly.

For years, the overall play with ARM has been "Apple could pump an A-series full of wattage and it would perform great!" and I'm sure that's true, but I still feel like there are questions that have to be answered (even if that answer is "Apple builds it and ships it for a couple years") about what an Apple-designed desktop experience computer will be. Part of the problem in the PowerPC era, especially with the PowerMac G5, was that Apple's insistence on designing the platform themselves ended up creating a less performant and less reliable computer.

The other question I'll address here is about the overall transition strategy. Apple pre-announced the PowerPC and Intel transitions, and had prototypes for both, and at the dawn of the Intel transition, Apple announced a Pentium 4-based machine known as the Developer Transition Kit (commonly: DTK) they leased to developers up front to do work on, and then swapped that machine for a production iMac once the lease was up. They didn't have a developer kit per se for the PowerPC transition, but did have some documentation about getting 68k apps ready to run well on PPC.

The DTK was a pretty rough machine. It was an almost entirely stock Intel 945 motherboard placed inside a PowerMac G5 case, with some tight cable management and adapted to fit the case and the G5's power supply. Apple could have sold it to consumers, and I bet many would have purchased one excitedly, but it's notable for having been very un-Apple, overall.

It feels, from my somewhat distant observation, that there's pent-up demand for such a machine, and people have kind of been speculating for years that one day an ARM-based Mac would just drop from the sky. So if they were clear enough about what was happening (see also: Windows RT), Apple could hypothetically just announce the first customer-aimed Mac with ARM in it, with the caveat that it's essentially a SafariBook until more software gets ported. (Especially if there's no emulation or translation layer, similar to Rosetta.)

Hilariously, I don't even think that limitation would materially matter to most Mac users, because so much productivity has moved to the web. Granted, running a whole bunch of web apps, even in Safari, will absolutely kill whatever battery life advantage an ARM MacBook might presume to have. (So: a Mac mini/micro or an ARM-based iMac might actually be the best starting point for a retail machine.)

But, Apple has a playbook right behind them and it's most likely we'll see an announcement, a developer box not available for general sale, a new Intel-based iMac, maybe a fresh iMac Pro, and there's a distant possibility we'll see a refreshed Mac mini.

Jun 08
Vintage Computing Ecosystem

One of the more interesting problems to come up as I stay at home and putter is that I'm starting to get interested in pulling out vintage and project machines and poking at them. This week, it was the IBM ThinkPad X31 and the HP OmniBook 800. Both machines are relatively well configured and in fine working order (save for batteries), so it was mostly a matter of saying hello.

An interesting issue came up though, especially as it concerns the X31. I lent it to my employer a few years ago for use with some specialized piece of equipment and then got it back and hadn't, since, bothered to clean it up or put my own stuff back on. (When it was 'parked' it had Windows 7 and Office 2010 on it, set up to sync my fd.stenoweb.net sharepoint document libraries, my personal OneNote file, and my stenoweb.net email, basically as a miniature organization/writing machine.) I couldn't find the file (although I didn't try very hard, admittedly) and my USB of Acronis True Image Home wouldn't run on that machine, so I set about trying to install other OSes on it.

The short version here is that the only thing I happened to burn that was able to install was XP. I got Windows XP, Office 2003, and Macromedia MX on it; eventually I'll add Adobe CS2, perhaps a version of Visual Studio from the era, and then patch everything up as far as I can, and then… I don't really know what I'll do then.

The issue, I think, is that Windows NT 4, which has drivers for this machine, doesn't have a good way to get onto it and boot fully. If I've ever run it on here before, what I did was put the hard disk in another ThinkPad and then move it back once all the driver files I want to use were in place, so I could get networking going; with Windows NT 4, networking will need to be the primary way to get data in and out.

On Windows XP, it's still pretty easy to find out what the last versions of everything that works are, but with NT, when I get around to doing that (admittedly I'll probably end up starting on another machine for NT) it'll be more involved because I'll need to try to see what works or do research based on what I want. I've got a bunch of updates that I need to install on XP as well, and was able to use WSUS Offline to generate a package.

The two overarching problems here are:

  • I don't really care for Windows XP, so I do want to find something more "interesting" to put on this machine
  • My ecosystem for working on a lot of this stuff is a little lacking. I happen to have my T42p, which I'll need to use for this project, but otherwise it's just a matter of things working based on what hardware I have

More interesting, perhaps, than the X31 is the OmniBook 800CT. That machine is a 166MHz Pentium MMX with 48 megs of RAM and an 800x600 display, on its original install of Windows 95. Its biggest problem, per se, is that it's kind of a top-heavy old install, full of a previous owner's personal and work stuff, which is interesting but which I feel uncomfortable leaving hanging around. The other issue I have with it is entirely aesthetic: I don't love the built-in little mouse. I can do file transfers using the PCMCIA slot and a CompactFlash card, or I may be able to beam stuff over to the ThinkPad X24, but I need to think about what to do with it longer term. Serial mice (cute ones, even) aren't expensive, so I'll probably go ahead and get one. OmniBook accessories, on the other hand, kind of are expensive: I'd want a cable for the diskette drive or a replacement diskette drive entirely, and there's a CD-ROM drive available. Plus, I couldn't find the preload CD online, which means I need to look for that, too.

Depending, I'll probably put Windows 95 back on it, unless it's known to run 3.1 or 3.11 well. (Per the manual, it is, so it might just be a matter of whether 3.11 can actually run with and use 48 megs of RAM, and whether I can find the appropriate media.)

Though, and this is a discussion for a later date, I have some other ThinkPads that are prime candidates for Windows 3.1, as they're just about at the top of what's practical or reasonable for Windows 3/3.11 hardware.

The other thing this has me thinking about is what ecosystem support for all these different machines is, or should be, like. It's not hard to get things for some of them, but there's an aspect of prioritization – of what is or isn't high priority given my budget and my other projects. (That prioritization is itself another project I need to work on at some point.)

I would like to get into some vintage Windows networking stuff. I could run a Windows Server 2003 or older server on TECT (heck, I think I can even join Windows Server 2003 to stenoweb\), and it should be possible to use PCMCIA or parallel-port Ethernet on the laptops, which would save me from having to deal with diskettes or other removable media. Both vtools servers (AppleShare IP 6.3.3 and Mac OS X Server 10.4) can speak SMB, so those might be worth trying, even if only for internal use, just to avoid having to run another vintage server. I do also have a copy of BackOffice Server 4 or 4.5 that might be fun to run, but rather than picking up a big vintage server, I'll probably just run that on TECT.
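
For what it's worth, the client side of that is simple enough that even NT 4 handles it with in-box tools. A minimal sketch, assuming a hypothetical share called archive on TECT and a stenoweb domain account (the names here are placeholders, not anything I've actually set up yet):

    rem Map the share to a drive letter (works the same on NT 4, 2000, and XP)
    net use Z: \\tect\archive /user:stenoweb\cory

    rem Copy files in either direction, then disconnect when done
    copy C:\outbox\*.* Z:\
    net use Z: /delete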

The problem is that this all takes time and space, and I have lots of outstanding Mac projects as it stands, so I'll see where I end up with all of this. It has been fun to look at, though.

I do kind of eventually want to end up with a machine that has OS/2 on it, but that's a scenario where I have utterly no context. Gravis wrote up an OS/2 guide, so I may start there, with Warp 4 or 4.5 on the X31, since it should (as an IBM computer, from an era when IBM was still supporting OS/2) have drivers. I've also been thinking that the X31 might be a suitable machine for Haiku. I at least want to try.

May 25
Cloud and P2P Scams – Akash Edition

One of the weirder things I've gotten into over the past couple of years – in terms of, just, being intensely interested when they show up – is cloud and P2P scams. At the core of these things is typically some kind of reasonable-sounding idea, or perhaps even some kind of compu-socialist or compu-communist truism I agree with, having to do with reducing costs or increasing access or whatever the case may be at the moment. (As a moderately relevant announcement, unrelated to the actual subject matter here: I've started to collect information about these on the p2p-cloud-scams page on my wiki, so let me know if you have any for me to add.)

I've been obsessed with some of the technical ideas behind a lot of this stuff for what I feel is fair to characterize as nearly ever. I'm basically a socialist or communist at heart, and one of the ways that shows is that I've always wanted to be able to resell computing resources – either because I errantly believed that something I had would at some point be uncommon but still needed (there's a little bit of vintage-collector brainworm stuff going on there), or because it would be good, fun, neat, or charitable to share resources that are surplus to my needs with other people who either don't care to run those things themselves or can't, for whatever reason.

I already do a little bit of this with TECT, but that's decidedly imperfect in a lot of ways – not least because, as an enterprise or business tool, TECT and its software loadout are designed, more or less, to enable boss supervision, so it's possible for me to do things like examine the mailbox and home folder contents of users on the system (even if I don't happen to be doing that). Today, the things I'd like to see mostly center around building some kind of informal peer-to-peer file sync system that lets people allow others to sync onto their nodes.

The problem is that all of this gets built with the idea of monetizing it, so you end up in the realm of weird crypto scams, of needing an unreasonable amount of resources to dedicate to the task, and of issues like the fact that monetizing consumer-grade Raspberry Pi appliances sitting in people's houses requires a certain amount of internet throughput before you can sell storage to businesses or other people.

At any rate – the most recent one of these I've discovered is Akash. Akash adds a new element and combines all three: compute, storage, and crypto. It appears that Akash has nothing shipping today, unless there's software that runs on normal PCs or something you can install as a hypervisor. They've got an incoming appliance (which is, admittedly, the part that caught my attention), lots of over-the-top claims about a personal supercomputer, their own cryptocurrency, and promises of becoming rich and unseating the likes of AWS and Azure.

That appliance is interesting, but weird. About a thousand dollars gets you a bunch of ARM cores, some GeForce cores, some tensor cores, 24 gigs of RAM, and 544 gigs of eMMC storage. A thousand dollars seems like it should get you better storage, and it's an interesting configuration overall: 24 gigs of RAM gives you room to run a couple more, say, Linux/BSD VMs than your average home computer running VMware would, but it seems like a really top-heavy setup.

The appliance, which is called the "Supermini", is advertised both as a personal supercomputer and as a way to passively generate income, which I'll admit to being interested in. I'm curious what the actual functionality is here: is it for developers and engineers and scientists looking for a cheap box running Kubernetes to offload compilation and long-running tasks to, or is it built explicitly with the intent of reselling capacity as a cloud service?

The other question is the device also being advertised as portable, which would be great if you were, IDK, a developer using it for builds, or someone using it to Do Science alongside a laptop that might not be well suited to that task, such as an older machine, a MacBook Air, or a Surface/Go. Though, if you do that, you're presumably somehow disqualified from using at least that particular unit for selling resources to other people.

If you leave your unit at home, presumably you can do both things, and you might even be able to access more than just your particular unit if you have a particularly big need – albeit you'll probably end up being charged in their cryptocurrency.

On the face of it, it sounds like a good idea. Or, at least, I feel like I'd be enthusiastic about the platform's potential, even though I doubt it can unseat AWS and Azure.

There are two things that give me concern, overall. The first is that, to be perfectly honest, I don't think this concept will actually work. Home users don't have good internet connections, and most ISPs explicitly forbid running services out of homes – which is at least implicitly what using one of these for its passive-income-generation potential entails, whatever that turns out to be in particular.

To be honest, I suspect about half of it is that the box will be mining crypto, which, depending on the power supply, might be "fine" if annoying, and the other half is going to be selling resources to other people. The problem is that you won't be able to use these to host web sites, so you're either getting other people's science, or it's similar to the other "earn money on your computer!" scams where what's actually happening is you're running a VPN service. The next possibility is that you're generating crypto, either for other users or for the company running everything, and this entire thing is just an easy way to scam people into doing that for you.

The worst part here is that there's some kind of alternate universe where this might actually work. In the 1980s, as part of being deregulated, the Bells collectively promised universal 45/45 broadband, which, of course, never really happened. If that had become a thing, these kinds of schemes – where you buy a box or participate in a community or peer-to-peer network – might actually have been viable.

Edge computing, which is the new hot term, has some of this kind of thing going for it, but that's all aimed at corporate buyers, telcos, and cloud vendors or software service providers who want to sell, say, data collection appliances or hardened boxes for industrial controls that need some amount of local compute. (I've written here before, for example, about how I'd kind of like to have an on-site caching appliance for services from Google/Apple/Microsoft, for consumer or small office use.)

So, I'm going to keep up with Akash, but I don't have a thousand dollars on hand to buy one, and I strongly believe that anybody buying one today runs the risk of buying into something that, in the absolute worst case, ends up being a really swole brick. Best case, when Akash shuts down, you'll be able to reformat the box – or your local credentials will still work, and it won't be tied entirely to their online infrastructure.

Before I close these thoughts out, I want to compare Akash to another cloud-at-home type of product, Antsle. I wrote about Antsle a couple of years ago, here. I thought Antsle was a bad product with marketing that doesn't understand its users very well, and, to be honest, I mostly still do. But, critically, I do not see Antsle as a scam. There's no P2P involved, there's no crypto, the machine is based on standard hardware with believable real-world specs, and Antsle is very clear about what it is: a physical appliance built around an extant piece of software, the antman virtual machine management tool.

So, there it is. I might cover this again later, but that's where things stand for now. To add to all of this, the actual physical product is vapor at the moment, too.

May 11
Physicalization

With the recent rearranging I've been doing of some of my web services for certificate-management purposes, I've started to think about some other possible arrangements of services.

One of the biggest weaknesses of TECT-the-hardware at the moment is that, with increasing numbers of workloads running on it, and possibly an error or misconfiguration with the RAID controller (I'm thinking the battery might need to be replaced), IOPS are moderately difficult to come by.

This is "fine" because most of my workloads are modern enough that they can wait a couple seconds, but the place it becomes most noticeable is with my SharePoint server, particularly the one I'm using on the regular, maron, which runs an internal site, maron, new, doku, and will probably run the incoming fresh personal (static HTML store) site, which hasn't come back online yet.

Because everything is on VMs, it would be easy enough to move the VM to better storage (an SSD installed somewhere, for example) or to another machine entirely.
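
The mechanical part of that move is pleasantly boring, whatever the VM ends up living on. As a minimal sketch – all the paths here are made up for illustration – the data move itself is mostly just mirroring the VM's folder with the guest shut down:

    rem With the VM shut down, mirror its folder to the SSD volume
    robocopy D:\VMs\maron S:\VMs\maron /E /COPYALL /DCOPY:T

    rem Then point the hypervisor at the new location from its management
    rem tools, boot the guest, and retire the old copy once it's verified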

I workshopped this idea briefly in a previous post, but basically: migrating the workloads and services – or the VM wholesale – to a new machine would alleviate this issue, and may be a way to keep things running while I do work on TECT.

We'll see what ends up happening – which is mostly to say, what I end up doing. The SharePoint server is the most complicated workload, and it's the most complicated one to migrate. Not impossible, but it would benefit me to keep it in its virtual machine, which is honestly the most complicated thing about moving SharePoint around. I'd need to reduce the RAM on the virtual machine by about half, from 12 to 6 or 7 gigs, depending on what it ended up running on.

The problem is that I don't want to split everything up so much that I need to run a pile of machines. The next-smallest machine after TECT is finnmark, the twin to my current desktop, which has 16 gigabytes of RAM – enough that I could bring maron and probably one or two other small things to it. (Incidentally, that machine ran the old TECT VM for a while, and that's probably the thing that would be best to put on it, since if TECT ends up with much less RAM than it currently has, things will slow down pretty terribly.)

The next-smallest machines I have are the Mac minis and the remainder of the mini PCs: a Lenovo ThinkCentre M92p and a Dell OptiPlex 9020M. The problem with those is that neither has spare RAM or storage available at the moment. In addition, I'd been thinking about saving the 9020M for myself as a new desktop, since it's much newer than most of my other desktop machines, and my need for gaming graphics is significantly reduced these days.

With the mini, I'm in a bit of a different situation. It no longer officially runs the latest Mac OS, although it would run a patched installation just fine. It's just on the edge of what I consider vintage OS X, and I have some hypothetical uses for faster Macs in that realm. Ironically, it's the same age as my current main desktop, and I know it would run Windows, Hyper-V, or Windows Server competently, although it would need the same RAM upgrade the other mini desktop PCs would in order to virtualize SharePoint onto it.

Generally speaking, the potential uses for the mini are, more or less: become a vintage personal-use desktop and a support/admin machine for vtools; host some of my side-sites, taking some pressure off the existing maron server; or receive a new installation of Mac OS and work as a side-task box, for things that aren't quite server duties and aren't quite desktop duties.

I'm moderately tempted to redo its present installation of Mac OS X 10.13 Server and use it for the static and PHP web sites, perhaps in addition to the very legacy (and, realistically, internal-only) MAMP sites I still have on hand and really need to get information out of. (At the very least, running those MAMP sites is a reason to keep that machine on 10.13 or older. I suspect they would run on 10.6 as well, if I needed, but I haven't had an opportunity to try.)

The main limitation on any of this is that I arguably need more RAM and storage for all of the machines in question. The 2012 mini, which is my modern-Mac-tourism desktop, has just 4 gigs of the stuff, and an SSD older than the machine itself.

My initial plan was to order RAM to bring both minis up to 16GB, their max, plus dual-disk kits for both of them, so I could run two storage devices in each. The biggest problem with that is that it's quite expensive to do for both minis, and doing it for the M92p and the 9020M would be that much more expensive – I'd say about a thousand dollars to max out the two minis, the M92p, and the 9020M. To add to all of this, the M92p is unique in the group for not supporting any form of dual storage (that I happen to know of at the moment), so if it gets a boot SSD, it'll need a big one; otherwise it can't be assigned more than one job, will need external storage, or some combination thereof. And while the 9020M does support dual storage, the first device needs to be (better, but more expensive) NVMe storage, which is fine but expensive.

I have a disk and one stick of RAM to put in either the M92p or the 9020, but the 9020 also needs a new storage device.

The bummer, as far as server workloads are concerned, is that a thousand dollars gets you most of the way to a thoroughly respectable configuration on something like a PowerEdge T40 or T140, or the new HPE MicroServer Gen10 Plus – which is, itself, most of what would be needed to replace TECT outright.

This is a tough spot, because I don't strictly think TECT needs to be replaced, but deciding whether to spend money on upgrades for it or on a side-box to take on some of the heavier work is tough. I haven't priced out what the upgrades for TECT would really cost, but I suspect it would also come to round-about a thousand dollars: max the RAM, drop in another RAID card, perhaps install a second power supply, replace the old 2TB disks with faster and better (perhaps higher-capacity) disks, perhaps rearrange things into groups of RAID 1s or one big RAID 10, and put a couple of SSDs in.

That doesn't address TECT's backup situation, though admittedly, physicalizing a couple of the workloads would improve that a little by splitting things into workloads that can be backed up onto smaller, cheaper disks. This should be possible with TECT as well, and I'd need to do it if/when I upgrade TECT itself.

Of course, with lots of this stuff, I don't need to spend the entire thousand dollars at once. (In fact, it would be a terrible idea to do so.) I could, say, spend a couple hundred on the upgrades the M92p needs – mostly 16GB of RAM, a 1TB SSD, and a 2TB pocket hard disk for backups – and move maron over, along with, say, the VM hosting the fd SharePoint 2010 site, the web proxy, or perhaps both of those.

Then I could upgrade the 2012 Mac mini next, under the auspices of using it as a primary or secondary desktop (which – long story – I kind of already have), and then either the 2011 mini or the 9020M, depending on what needs to happen. (For example, one of those machines might be great for running the new Exchange server, but that's a different story.)

 

In Pandemic news – mostly the same. I had a minor blowout with my HOA. It was my fault, but it's still annoying regardless. After having been tagged a couple of weeks ago for using the wrong parking spot, my car was tagged for removal under the board's abandoned-vehicle rule, because it displays an expired registration tag – even though its registration is, in fact, not expired.

I discussed it with the tow truck driver who put the tag on, who more or less explained that the matter wasn't his to decide. I called, texted, and emailed the property manager in frustration and got a response that basically said something to the effect of "sorry, them's the breaks; you could park on the city street while you wait for your new tag to show up", along with some further explanation about how this was done at the direction of the board – which pretty heavily implies this is in the hands of the people who live near me, who either want my car gone so they can have its parking spot, or are, for better or worse, looking for something to do.

I unhappily wrote a reply to the property manager but haven't sent it. I'm going to clean it up and perhaps send it later.

Again, this is my fault, and not knowing the rules doesn't make you innocent of breaking them, but I do think these are bad rules. The trouble is, firstly, that I don't have the time or energy to become an HOA board member myself, and I don't strictly know what a better version of this rule would look like. My original suggestion was to check registrations with ADOT, but I don't know what that would entail.

The biggest question is what enforcement is like, and why it conveniently seems much stricter now that, you know, there's a global pandemic on and people are at home more. Of course, perhaps the better question is why the board cares whether a car is registered at all if it has a parking permit, and why some of the other problems we're having, such as illegal use of the handicap spot, aren't getting the same attention.

To be honest, I think they should just go ahead and assign spots. It would probably solve lots of problems we have in winter and reduce tensions with regard to people taking certain spots.

Reallocating the complex's singular handicap spot away from our particular little corner of the lot might also help, as would adding some more spots. This area is particularly troublesome because at least one of the units next to us has three or four cars in total. They aren't always using more spots than they're allocated, but things still get awkward as people try to get the spots closest to their houses. Because of how things are organized, the unit at the end gets the two spots in front of my house (or one of them), my housemate and I get the next two spots, and the second car from the end unit gets put in a random spot.

Really, assigning spots would alleviate a lot of trouble, except for the other problem I have, which is that my neighbors are often not good at telling their guests to use the guest parking.
