75 Comments
Mar 26, 2023 · edited Mar 26, 2023 · Liked by Alberto Romero

I haven't seen much discussion about the general public's reaction to AI. Currently AI awareness still seems restricted to a very small subgroup of people.

Whenever I talk to others about AI, even many tech workers (!!!), 95% seem utterly unenthusiastic and never demonstrate a single ounce of curiosity about AI's capabilities or what it may bring. Managers and executives, by contrast, demonstrate much more interest in AI, and the more senior they are, the more so.

This just feels utterly worrying. How can people see a tsunami coming and not care? It feels like these people are just going to put their heads in the sand. And when GPT-5 or GPT-6 rolls out, they will confront these AIs with zero preparation or time to cope, causing mass social turmoil/riots.

Any plans to write more on the public reaction to AI? And on how the AI community should communicate with those who are not in the know?

author

That's a very interesting topic. I've thought about this a lot. In my experience, most people are deeply unaware of AI (the majority only know about ChatGPT, if anything). I think it's quite hard to grasp the possibly unprecedented impact that AI will have unless you understand the history of the field and the general goals of AI researchers and companies. I don't think it's that they "don't care" but that they don't have the right priors and right criteria to evaluate the importance of what we're living through.

An article on this would certainly be valuable but I think there's a paradox here: those who could get the most out of it will, by definition, not find it. I see some real value in providing resources for people like you to teach others about all this, though. In that sense, it's worth it.


Great question, artworm.

One way to understand AI as a cultural phenomenon is to look at our relationship with nuclear weapons. As a society, we seem to have had no trouble at all almost completely ignoring the single biggest, most imminent threat to the modern world, one that is happening RIGHT NOW, not maybe someday like with AI. As an example, I've yet to see any AI commentary anywhere that shows an understanding that the future of AI will be decided by what happens with nukes.

It's not just the AI community that's deep in denial. I've spent years now trying to engage academic philosophers on nuclear weapons, and they couldn't be less interested. And these are people with PhDs who are supposedly experts at critical thinking. I'm not making this up, and I'm at a loss to explain it.

Young people have decided climate change is a huge threat, which is great, but they too seem to have little to no interest in the more imminent threat presented by nukes.

Observing these kinds of blatant, logic-defying denial phenomena in the realm of nuclear weapons is what has made me skeptical that we're anywhere near ready for other revolutionary emerging technologies like AI and genetic engineering.

Mar 27, 2023 · Liked by Alberto Romero

What are your thoughts on the "Sparks of Consciousness" comments?

author

Do you mean the "Sparks of AGI" paper on GPT-4?


Yes, that might inform the conversation, but I'm also thinking more about the commentary from David Chalmers: https://twitter.com/davidchalmers42

author

Not sure what tweet you're referring to! But about sparks of AGI and consciousness, you can read this, which I think perfectly frames my stance: https://thealgorithmicbridge.substack.com/p/schrodingers-agi

Mar 27, 2023 · Liked by Alberto Romero

Seconded! Would love to hear your view on this.


My hearty congratulations on this splendid achievement, Alberto!

Here's my question: how have you integrated AI-based applications and products into your day-to-day routine, either for personal requirements or work outcomes? Curious to know!

author

Thanks Punit, good question! I have to confess that I use neither ChatGPT/GPT-4 nor AI art models for anything other than the cover image on my blog posts. I've experimented a lot: summarization, ideation, brainstorming, co-writing, exploration of the models, etc., but none of those use cases have managed to remain in my routine/workflow.

In the case of practical uses (e.g., using GPT-4 to write a first draft), I've found that they don't fit my way of doing things. For instance, my first drafts come directly from my mind. They exist either because I want to express something or because I want to learn something—using ChatGPT for that takes that value from the process in both cases. I see the point of doing it if the main purpose of writing is to create a piece of content for others to consume. But I don't write just to create more content. I find that idea... saddening.

On the theoretical/research side (e.g., using ChatGPT to learn about ChatGPT), I've found that others have put in much more time than me and that reading their insights is incomparably more valuable. I still believe that practicing first-hand is irreplaceable, so I do that quite a lot (and would like to do it even more in the future).

I don't rule out using them more in the future, though. I have nothing against them; they can be extremely useful if used correctly.


A question: what do you think Microsoft's and OpenAI's strategies are behind launching both Bing and ChatGPT + browsing? Because, to me, with browsing I feel I don't need to use Bing anymore. I'm really curious about it.

author

Are you asking what's the point of integrating Bing with GPT-4 when you can just use ChatGPT and the browser separately?


Yeah basically

author

I'd say there are two reasons here:

First, OpenAI and Microsoft are different companies (even if under a tight business relationship now) and they have different revenue sources. Microsoft's Bing runs on ads, while ChatGPT runs on subscriptions (Plus, which includes GPT-4) and the API, which works on a pay-as-you-go model.

Second, I think trying to combine LMs with search engines is a reasonable quest if you want to enhance both by mixing their strengths and reducing their weaknesses (I don't think Microsoft has achieved that, though).
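
Schematically, the combination looks something like this (a toy sketch with hypothetical `search` and `llm` interfaces, not Bing's actual pipeline):

```python
def answer_with_search(query, search, llm):
    # Hypothetical interfaces: `search` returns ranked snippets (fresh,
    # attributable sources), `llm` turns a prompt into fluent text.
    snippets = search(query, top_k=3)
    context = "\n".join(s.text for s in snippets)
    prompt = (
        f"Answer using only these sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    # The LM contributes fluency and synthesis; the search engine
    # contributes freshness and grounding in retrievable documents.
    return llm(prompt)
```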


Thanks for answering that. I tested ChatGPT browsing with a shopping request on the first day they announced it and gave access to Plus users. And I quickly realized it's going to be such an easy move for OpenAI to weave ads into the ChatGPT-generated answers to promote brands (or at least for non-Plus users, if they open up browsing to everyone in the future).

I wonder if they will ever serve ads in ChatGPT like Microsoft did with Bing. I'd be both surprised and not surprised if they did.

I’m surprised because Microsoft didn’t not put that into agreement with OpenAI for none compete clause. Or maybe Microsoft is betting on ChatGPT just in case Bing is not successful as ChatGPT.

I’m not surprised if OpenAI is going to serve ads into none plus users with ChatGPT because it just feel like no brainer as an investor Point of view. Especially as the mass population is transferring from search to chatbot for getting answers. The brands need a new ways to put their brand in front of audience. And ChatGPT has massive active users daily.


How long do you think it will take for more commercial uses of generative AI to take hold (i.e., big companies incorporating it into everyday processes)?

author

It's already happening! Some of the largest tech companies in the world have integrated generative AI features into their product suites, e.g., Google: https://workspace.google.com/blog/product-announcements/generative-ai and Microsoft: https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/


Thanks Alberto - what about non-tech companies like industrials, consultants, banks, etc.? It seems there are applications everywhere for internal processes, but I'm curious how long before it's a mainstream tool across industries. (And congrats on your milestone!!)

author

Morgan Stanley is another example: https://openai.com/customer-stories/morgan-stanley

There's a sort of widespread embrace of generative AI across industries. What I believe, though, is that the GPT-4 kind of approach won't be widely used. Instead, I see different companies/sectors optimizing customized, highly specific smaller models for their use cases rather than using a super large, costly, generalist model for everything (more Stability style than OpenAI style).

Mar 27, 2023 · Liked by Alberto Romero

Just today there was an article about Levi's using generative AI as a partial substitute for human models https://mashable.com/article/levi-strauss-lalaland-ai-models

Mar 27, 2023 · Liked by Alberto Romero

I would love to incorporate AI as a kind of “research assistant,” but so far it’s been disappointing at least with ChatGPT. Maybe I am using it wrong, but the responses are so general, often unreliable, and unimaginatively repetitive. Not very useful for the social sciences or as a writer more generally. Any advice? Or should I just wait until the tools improve to even try incorporating this?

An all-purpose researching bot is really what I have been dreaming of - given I have so many notes and ideas, but not enough time to parse through the web searching for source material. This would really be that 0 to 1 moment for nonfiction writers. So far though, it’s not working out for me.

author

There's a big problem with this. ChatGPT, GPT-4, Bard, Claude... all have it: they're not designed to ascribe a value of truthfulness to the output they generate. This means that what they generate isn't measured by rightness or wrongness; it simply sounds human.

A different system that made stuff up from time to time but had a measure of how wrong each generation is wouldn't be as problematic as GPT-like systems. Current transformer-based autoregressive language models can't do that.

In short, using them for research is asking for trouble. Even if you're an expert in the topic you're researching, I wouldn't encourage this use case.
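
To see why, here's roughly what the generation loop of these models looks like (a minimal sketch; `model` is a hypothetical stand-in that returns next-token likelihood scores). Note that no truth value appears anywhere:

```python
import torch
import torch.nn.functional as F

def generate(model, tokens, n_steps):
    for _ in range(n_steps):
        logits = model(tokens)                    # likelihood scores per token
        probs = F.softmax(logits[-1], dim=-1)     # distribution over next token
        next_token = torch.multinomial(probs, 1)  # sample what sounds plausible
        tokens = torch.cat([tokens, next_token])  # append; nothing checks facts
    return tokens
```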


I had some better luck when I refused the answers and insisted on more detail/precision/insight. This led to some surprising successes. But it was only verifiable because I knew the answer. It also left me wondering about "human support in the background"; there's no way to tell. It is frustrating to have to push the system to go further, particularly since the answers can't be relied upon to be valid.

Mar 26, 2023 · Liked by Alberto Romero

A colleague who is a paid subscriber gifted me a subscription to sharpen my musings on AI. Say, what's your stance on the latest efforts to regulate AI? I am not even sure which part of the world you inhabit. And, out of curiosity, what drove you to dedicate your writing and thinking to AI?

author

Hi Silke, congratulations on the Heise scoop on GPT-4, that one was a total hit! And thanks for being here. I'm Spanish, so we're actually quite close.

What I can say about regulation, both in Europe and the US, is that it's coming too slowly. The pace of progress in AI—both apparent and real—is too fast. Regulation is always slow, but in this case slowness may entail a high cost. A couple more things: policymakers should make the effort to understand how AI systems work (it's not always trivial), and for that they should engage with experts from both industry and academia. Also, they shouldn't act like they did with social media: letting companies self-regulate isn't a good idea IMO.

About the second question: I actually started writing publicly about different topics back in 2020. I settled on AI because of my background and the growing interest in (and performance of) the systems (this was early 2021, post-GPT-3). However, I tend to write about something else using AI as a starting point. Almost none of my articles are technical or exclusively about tech because I'm interested in many other things (e.g., philosophy, politics, and culture). Another reason I chose AI is that I saw almost no one writing about it from the same perspectives and exploring the same combinations of areas as I was.

Mar 26, 2023 · Liked by Alberto Romero

So cool to see you hit this milestone - congratulations Alberto!

As a curious fellow Substacker, how did you see your subscriber growth curve evolve? Was it a gradual / linear growth or did you see some occasional big jumps, followed by relative plateaus? What in your experience were the types of posts that tended to bring in the most subscribers? Were they detailed opinion pieces that you're famous for, or were they more practical posts with advice, tips, and insights? Do particular channels seem to do better than others (e.g. organic vs social)?

Any interesting observations you may have made about how Substack has worked for you would be awesome to hear about!

And again - huge congrats! Go out and celebrate!

author

Thanks a lot Daniel!

I saw a bit of both: some jumps tied to specific events, like my viral article on Stable Diffusion or when Substack featured my newsletter, and also linear growth over the weeks (not really any plateaus). The posts that worked best did so due to a combination of things (from what I can tell): timely topics, controversial arguments, attractive headlines, short reading times, etc. It's really hard to predict virality, but there are a few things you can do to improve your chances. Also, I've been using other platforms to drive traffic (although Substack does a good job with discoverability now): Twitter, Reddit, and Medium, mainly.

Probably the most important thing to know about Substack is that it's better to start with an audience (I started with 700 emails in June 2022) and niche topics work best. If you have that, there's no better place on the internet to write.


A conspiracy theory that I find amusing is that OpenAI keeps releasing so many groundbreaking products because they have already built GPT-5 and it's doing all of the coding for them!

But I think there is something about having foundation models and being able to layer on top of them that accelerates the pace of progress. Do you have any insights into why OpenAI is able to ship so much, so fast?

author

I'd say the reality is that OpenAI has had GPT-4 ready for a long time now. They finished training in August and were able to develop a lot of things that didn't ship for months. They might already be finishing training GPT-5, who knows (I'm not sure about it coding the next GPTs; it's probably helping, though. It'd be quite telling if OpenAI devs weren't using their own product).


Yup, or at least they are quite advanced in training GPT-5 right now. It's interesting to listen to Sam Altman on the Lex Fridman podcast. There's an interval of closed testing with partners and safety/fine-tuning after the pre-training. GPT-4 seems to have a different architecture than their GPT-3.x branch. There is one half-sentence where Sam starts to say something about the architecture and interrupts himself mid-word. Interesting question whether they are using their own product for coding. Would it make a difference?


Sparks of AGI: looking forward to reading your comments on that article in due course. It seems every time I blink there is a fundamental shift (or not?)...

author

I assume you got my answer from the Schrodinger's AGI article... (I'm revisiting this because there were too many comments!)

Mar 27, 2023 · Liked by Alberto Romero

Hi Alberto, thanks :) also for answering all three of my questions. Your name reminded me of John Romero, who's from the US, now living in Ireland.

We are on the same page: policymakers should make the effort to understand how AI systems work. And they should understand them before they regulate. The dilemma is the acceleration of AI progress, driving policymakers into rushed decisions (because regulation is obviously needed – and lagging). Companies with deep pockets have other means to fulfil compliance requirements. Our European economy is small-scale/scattered and will be affected in different ways. A difficult task for regulators. Self-regulation of platforms doesn't work, as we have seen.

Your polymath approach is highly appreciated, this is why I am here. As a society, as human beings, we need more meta-reflection like yours to fully grasp and actively shape the ongoing disruption. We have some things in common, perhaps. My background is in culture and history. I assume yours is in tech?

Looking forward to forthcoming episodes of The Algorithmic Bridge! ✍

author

I'd say the main problem is that there's no agreement at all among AI experts on which aspects of AI are the most urgent for policymakers to regulate, as evidenced by the open letter and the reactions to it (some of that is reflected in the Schrodinger's AGI article, which I wrote before the letter was officially published; I focused on AGI but the points extend beyond that specific topic).

"Your polymath approach is highly appreciated, this is why I am here." Thanks a lot! I have a mixed background actually. I studied aerospace engineering and then a master's in cognitive neuroscience (none of those taught me anything about AI or writing, though, that was all self-taught!)


Thanks for elaborating! I share your concern. Disagreement and scientific debate are healthy - if there is time for a process to reach common ground and a sound compromise. Currently, there is this impression of time pressure, and a lot of confusion in circulating opinions. I wonder how policymakers should figure out whom to trust. Ad hoc measures built on shaky ground may yield unhelpful results.

So, the term "polymath" suits you well :)

Mar 27, 2023 · Liked by Alberto Romero

Hi

Congrats on achieving this milestone!

And thanks for having this AMA as well

There has been news that Bard's model has only 770M parameters while GPT-4 has about 1 trillion. Is there any truth to that, in your opinion?

Also, I am curious: are there any other interesting or promising AI models on the horizon worth looking at, other than transformer-based ones?

Lastly, what do you think of the forward-forward algorithm vs backpropagation?

Thanks

author

Quite possible, actually. Google said Bard would first be rolled out with a lightweight version of LaMDA (although 770M sounds extremely tiny). The 1T number for GPT-4 has been going around. I've also seen some estimates (calculated by prompting the model to count to a large number and comparing how long it takes against GPT-3.5 doing the same; not super reliable) that put the number around 2-3T, IIRC.

About non-transformers, I recently saw a paper on RNNs (hadn't heard about that acronym in years) that showed promising results: https://johanwind.github.io/2023/03/23/rwkv_overview.html

About the forward-forward algorithm, I think it's too early to compare it to backprop. But I don't think backprop is really how human brains learn, so we may have to go in a different direction eventually. Maybe not.
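
For context, the core idea of forward-forward is to replace a single backpropagated error signal with a local, per-layer objective: real ("positive") data should yield high "goodness" (sum of squared activities) and fabricated ("negative") data low goodness. A rough sketch of that local loss (my paraphrase of the paper's formulation, not its official code):

```python
import torch

def goodness(h):
    # Hinton's "goodness": sum of squared activities per sample
    return h.pow(2).sum(dim=1)

def ff_layer_loss(h_pos, h_neg, theta=2.0):
    # Push goodness of positive activations above the threshold theta
    # and goodness of negative activations below it; each layer trains
    # locally, with no error signal flowing back through earlier layers.
    loss_pos = torch.log1p(torch.exp(theta - goodness(h_pos)))
    loss_neg = torch.log1p(torch.exp(goodness(h_neg) - theta))
    return (loss_pos + loss_neg).mean()
```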

Mar 27, 2023 · Liked by Alberto Romero

I feel like there is a wave of generative AI start-ups that kind of resembles what happened with web3 (especially NFTs), but I'm unsure how well these companies will survive. What do you feel are the best jobs (and companies, or types of company) for business students looking to work on building products that integrate generative AI and that are likely to be around for a while?

author

I think the current boom resembles the dot-com boom more than NFTs (which are useless). I don't think most startups will still exist in a few years—I'd even say most founders know it and are simply trying to make as much money as they can before the time comes.

It's really hard to know which ones are genuine and will withstand the lows unless you know the founders. That's, I'd say, the best approach: getting to know not the product (most look the same), not the pitches, not the use case, but the people behind all that.

Another option is to become a founder yourself!


You write, "I'd even say most founders know it and are simply trying to make as much money as they can before the time comes."

Yes, that's the way it works. Get in early, build like crazy, and cash out before the latest craze is over.


As the pace of the knowledge explosion continues to accelerate, no technology will be around for a while. However, any skills that relate directly to human beings, like business or sales, should be more stable.

Mar 27, 2023 · Liked by Alberto Romero

Hi there,

I'm (day)dreaming of a path to AGI where LLMs would be trained on COMPLETE multimodality, i.e., not only text or images but all our human modalities: our 5 senses + proprioception.

This would bridge the gap between a pure-language (autistic) "grasp" of "reality" and our grounded or embodied cognition. Then, when the computer encountered phrases like "I grasp," "I'm hearing you," or "I can smell it," it would have a better "grasp" of a reality commensurable with ours.

Now there is also progress to be made on causality identification and (human-like) reasoning.

Note also that a superintelligence wouldn't be limited to our 5 senses but could embrace the whole electromagnetic spectrum to benefit from other senses...

We are still so far from mapping the brain's various channels of communication (more than 40 neurotransmitters, 3 or more concurrent spiking-wave activation schemes, the contribution to (continuous) learning and memory of the dendritic trees formed by all the synapses, etc.) that I often wonder how we have already gotten so far with the poor brain ersatzes that are current ANNs.

Anyway, keep up the good work here, I'm really enjoying this newsletter, we are witnessing an interesting time indeed.

author

I also believe AGI will require much more than LLMs. Things like embodiment and multimodality seem paramount (although it's always possible to find a different path to general intelligence)

Mar 27, 2023 · Liked by Alberto Romero

Congratulations and thanks Alberto. It is nice to have a forum with other people excited about generative AI! There are so many tools out there that it is somewhat dizzying. The newest one that I have discovered is Lex.page, which now incorporates GPT-4. I signed up and am on the waiting list. It is a word processor with AI incorporated that performs like a co-writer. I think Microsoft is going to do the same with their Copilot for Word. Let's see which opens up first… Cheers

author

Yes, Lex is nice. It came out a few months ago and has been improving a lot since. It's quite useful as a Google Docs "replacement" (though now Google and Microsoft have integrated gen AI into their own products).

Mar 26, 2023 · edited Mar 26, 2023 · Liked by Alberto Romero

I much enjoyed your article on the AlphaCode paper. My impression after reading the paper (and the appendix) is that the "slow positives" have not been dealt with (i.e., they were ignored in the evaluation; I could be wrong, but it seems the focus was on reducing wrong answers). Slow positives are defined in that paper as solutions that are too slow, with the wrong (= inefficient) O-classification, etc. CodeForces does not include a condition or ranking on speed of solutions in the rules. I wonder if you have insight into what the results would be if efficiency were a factor? My impression from the paper is that the results would not be near those of a typical programmer. For a system to be competitive with programmers, one needs the solutions not only to be correct but (reasonably) fast as well. My understanding of current systems is that they cannot handle the building of efficient code. In their current incarnation, they seem to be restricted to supportive roles for the programmer. Even that role is necessarily limited, as the suggestion of inefficient solutions typically means a lot of thought needs to be put into improving the suggested solution. What are your thoughts on this aspect? I.e., the capacity of current systems to build efficient code?
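
To make "slow positive" concrete, here's a toy illustration (my own example, not from the paper): two equally correct solutions, only one of which would survive a time limit on large inputs:

```python
def has_pair_sum_slow(xs, target):
    # O(n^2): checks every pair; correct, but a "slow positive"
    # under any time limit on large inputs
    return any(xs[i] + xs[j] == target
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_pair_sum_fast(xs, target):
    # O(n): one pass with a hash set of complements
    seen = set()
    for x in xs:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```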

author

"In their current incarnation, they seem to be restricted to supportive roles for the programmer." This is the key sentence here. I agree. And I think that's the intention. The truth is AlphaCode is hardly comparable to ChatGPT or Codex. They embody very different approaches to the problem of coding. Also, it's very different to make an AI system that can do competitive programming like CodeForces challenges than to make one that" just" writes code from text instructions. There are some similarities, though, e.g., both are based on transformers.


I should have been a little more specific: the capacity of current systems to build efficient code, and AlphaCode in particular.

Mar 26, 2023 · Liked by Alberto Romero

So far it seems to me that the application that *most* people are actually using these programs for in the real world right now is to generate first drafts of texts dealing with a huge range of subjects. So it looks like the difference this technology will make for the culture as a whole will have very little to do with productivity as classically defined, but will lead to an incremental but real increase in the intelligence of everything everybody writes. (Not that this is a small change.) Outside of a few niche cases in applied graphics, porn being perhaps the one with the largest number of jobs and money at risk, not many jobs are going to be lost or even changed drastically. What am I missing?

Mar 28, 2023 · edited Mar 28, 2023 · Liked by Alberto Romero

I believe productivity gains are realised in terms of coding support for more routine tasks. Same in education. I had to bring programming assignments into the classroom until I figured out a way to deal with the fact that students can get a pass via ChatGPT. *But* on the flip side, I give test problems in basic programming to ChatGPT and it cuts down time by providing solutions, context, and examples that I only need to check, saving time and giving room to experiment with alternatives. A plus on the productivity side for routine stuff.

author

For routine stuff that you know about, right.


Good to know. Can I show your message around?


What you're missing:

1) Intelligence doesn't arise from the technology being used, but from the life experience of the user.

2) The more insightful an article, the smaller the audience.


I am not sure I understand your points. Here is a link to a NYTimes story about one example of what I was trying to talk about.

https://www.nytimes.com/2023/03/24/style/ai-chatgpt-advice-relationships.html?searchResultPosition=5

I imagine this dynamic played out over the entire society and that this application will be the major use of the technology. You don't??


Hi Fred, I can't access the article at the moment. I am considering a NYT subscription though. Your advice on that is welcome.

I just meant that technology won't make us more intelligent. You know, being on Substack doesn't make me more intelligent. Technology can make us more informed and more prolific, if that's what you mean.

If technology did make us more intelligent, that would be a double-edged sword for writers. Generally speaking, mass audiences don't want intelligence, they want entertainment. The most popular topic on Substack appears to be politics, where everyone yells about how they are superior to somebody else.


Perhaps it will also make some of us more stupid. I myself think the balance will be in favor of the technology, but I admit there are counter-factors. Can we say that the printing press has, on balance, made humanity in general more intelligent? Radio? The internal combustion engine? The internet?


This may revolve around definitions of intelligence. Certainly communication technology makes us more educated. More educated doesn't automatically equal more intelligent.

Example: I've been trying to engage academic philosophers on the subject of nuclear weapons for years now. Complete waste of time. Academic philosophers are clearly well educated, and their speciality is critical thinking. But they can't critically think their way to an interest in the single biggest, most imminent threat to the modern world. Educated to the max. But intelligent?


>> More educated doesn't automatically equal more intelligent

But sometimes it does, and I think this is one of those times. The user starts the process by inputting his own prompts, which he creates. He gets out some number of drafts and then merges them into a master document combining the best points. Surely in this case education equals intelligence.

So when you speak with one of these "academic philosophers" who annoy you so much, what do they *say*? What case do they make? And how do you reply?

Fred

Mar 26, 2023 · Liked by Alberto Romero

A large bravo from 🇫🇷

author

Thanks Lionel!


Another business question: One of the strengths of TAB is your engagement with your audience. As TAB races beyond 10,000 to the big time, you're likely to become overwhelmed at some point. Have any plans for that? An assistant perhaps?

author

Good question Phil. Right now I don't feel like I need it, but of course it may be a problem in the future. An assistant would definitely help, but I haven't given it much thought yet. Answering comments wouldn't be a task for an assistant, as you most likely value that it's me who responds!


Whoa! 10,000! You are the man Alberto. Impressive. If you had the time you could probably do well with a "how to succeed on Substack" blog.

I doubt this is a good plan for you, but if you don't already know, there is another Substack-like blogging network called Ghost, which has flat-fee pricing instead of taking a percentage like Substack does. The percentage-take formula may become more of an issue for you as you continue to skyrocket your audience numbers.

I just spent a couple hours checking out Ghost and have a review hitting my site tomorrow morning. Overall, Ghost looks pretty good, though it's a far smaller network than Substack. I have no intention of moving off Substack myself, but I'm happy to have a better idea where I might move if the need arose.

As for AI questions, Mr. Predictable Boomer Doomer still hasn't heard a satisfying answer to this question on any AI blog:

QUESTION: What are the compelling benefits of AI that justify taking on more risk at a time when we already face so many?


Yet another consideration:

You don't own your URL; Substack owns it. Although you do own your content, technically you are publishing your content on their site, not on your site. If you had to move someday, all the links to TAB from elsewhere would likely die. Your search engine traffic would likely be affected too.

You could address this with Substack's custom domain option. So then you would own your URL and could move it anywhere if needed. Before you made such a change you'd probably want to see if Substack can redirect any traffic to your current URL over to the new one.

If you did decide to get your own domain name you could consider using DirectNic.com as your domain registrar. I've been using them for 20 years and so far so good. No business relationship, just a happy customer.


Another benefit of Ghost is that you can download the blogging software for free and install it on your own server. This would make your investment in TAB completely independent of any host. The more successful your business, the more important such independence is.

The downside to this option is that you would then be responsible for the technology. This price tag can be offset if you can find and hire someone who is skilled in the software and can address technical issues for you. Ideally, you would know more than one such person.

I have no reason to question the reliability of Substack at this time, but 30 years of web development has made me wary of all hosts.

One concern I would have about Substack is that they are offering a quality service to everyone on the Internet for free, which means that increasingly this network will be dominated by people like me, who aren't generating any revenue for Substack. We can see this underway already in that while support is friendly, it's pretty mediocre compared to most hosts. However, as TAB becomes ever more of a revenue generator for Substack, you'll likely get special attention.

Bottom line, there's no reason to move at this time, but it would be wise to know how you would move to another host should that become necessary. TAB is on the road to becoming a Big Dog, and protecting your investment will become more of an issue going forward.
