Sunday 29 March 2020

Ruby with a Side of Sorbet

I come from a world of types. The languages I know best (C, C++, and Go) are all statically typed. You would think that living in the dynamically typed world of Ruby, PHP, or JavaScript would feel freeing. Instead, I feel annoyed and frustrated at the loss of my security blanket.


As early as PHP 5.0, types started to slowly creep into the language by way of type-hinting. As of PHP 7, typing is becoming more prevalent and I, for one, applaud it. My first job was on a Managed Services team whose job was to fix broken sites, and we worked in PHP. The vast majority of the bugs would have been caught and fixed with typing or proper index checking.

Other developers who had never worked in another language, or had only flirted with typed languages in school, didn't understand why I cursed PHP. Dynamic typing was freeing and easy; why would they want types?

At that time, I extolled the virtues of statically typed languages and urged many to at least start using type hinting. I gained many a convert and soon most of the company started to adopt it, certainly for any new software we wrote.


When I joined Shopify, I returned to the dynamically typed world with Ruby. Ruby was like PHP on steroids. Much of what it does worries me greatly but, thankfully, there are a lot of excellent developers at Shopify and most of the code I deal with is pretty well written. We have style guides that forbid, or at least strongly discourage, some of the gnarly things you can do with the language. There's a heavy emphasis on writing good quality tests, a full gamut of code review, and a concerted effort to improve everything we do.

That said, some of the same types of bugs I encountered in the PHP world were repeated by developers in the Ruby world. Issues that, once again, proper typing could have caught. However, this time around, I decided not to fight it. I am surrounded by some pretty strong developers and I was willing to continue to learn Ruby to see if proper techniques could further reduce bugs.

To be honest, they can do a pretty good job and I can certainly see why, when you know what you're doing, you can avoid most of the pitfalls of a dynamically typed language.

And yet...


There's a growing number of developers within our core product that have been pushing for the adoption of static type analysis in Ruby. I'll spare any details on history and alternatives; suffice to say, Sorbet is the solution currently being promoted.

The first time I tried it, I hated it. Then, many months later I bumped into one of our core developers during a conference and had lunch with them. During our conversation, I brought up Sorbet and we got into an interesting discussion in which he encouraged me to revisit it.

You see, when I first tried it, I was still a relative novice at Ruby. Sorbet felt foreign and tacked on. Some of the errors it gave were cryptic and, quite frankly, bizarre. I also had made an important mistake. I'd turned on the strictest level of typing.

Ya, don't do that. Not yet, anyway.

This time, I took the recommendation that gradual typing be used, even on a new project. Slowly increase the typing level as you gain stability and see how it goes. At the current maturity of the typing system, I think this is the wise choice.

The best part? It made some recommendations to write better Ruby. Not only that, it caught several bugs that my tests hadn't exposed! It even taught me, and a peer who is a long-time Ruby developer, something that neither of us knew about the language (that the initialize method should be private)!
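For the curious, that last tidbit is easy to verify in plain Ruby, no Sorbet required (Point here is just an illustrative class):

```ruby
class Point
  def initialize(x, y)   # Ruby automatically makes initialize private
    @x = x
    @y = y
  end
end

point = Point.new(1, 2)  # .new allocates the object, then calls initialize for you
puts Point.private_method_defined?(:initialize)  # prints "true"
# point.initialize(3, 4) would raise NoMethodError (private method called)
```

Sorbet surfaces this because it checks method visibility, which is how two long-time Ruby users can still be surprised by it.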


Use Sorbet! It feels weird, like a new pair of underwear, but once you get used to it, you'll be thankful for it!

Saturday 28 December 2019

React - A Follow Up

Note: This is a follow-up article to my first experiences working with React.

Shopify is full of many intelligent and skilled developers and, now that I work with them, I've gained the benefit of those far more experienced with React. We maintain our own React component library called Polaris but, that said, many of us are still novices at writing React apps. Now that I've gained more experience, I have more thoughts to share!

1. App Structure

I thought I'd gotten better at this but, many screw-ups later, I've realized I still have much to learn. Having a src/ directory is standard for React apps but it isn't always clear how to organize the sub-directories. There's also a lesson in writing components that I had to learn.

A components/ directory is created, and you can nest components in whatever way works best for you, provided there's sound logic to it. But knowing how to write those components can be pretty confusing for the novice React developer. The key is understanding how to utilize dumb, generic components alongside domain-specific components.

A single page application can usually be broken down into separate pages and it makes sense to have a pages/ directory. These pages are typically pretty domain specific and might contain sub-directories for components that only ever relate to that particular page.

Another option might be to create domain specific directories. A blog app could potentially organize different views into directories, like posts/, comments/ and users/.

Continuing the blog example: since you might have a list of comments, a list of users, or a list of posts, it stands to reason you should have only one component to display them, wrapped by a domain-specific version where necessary. Creating generic List and ListItem components that are wrapped by CommentList and Comment components, respectively, lets you get the most out of each component since, for the most part, your lists will share common properties.
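To illustrate the idea without dragging in React itself, here's a framework-free sketch of the wrapper pattern; in a real app these would be components returning JSX, and all the names here are illustrative:

```javascript
// Generic, "dumb" building blocks that know nothing about the domain:
const ListItem = (content) => `<li>${content}</li>`;
const List = (items, renderItem) => `<ul>${items.map(renderItem).join("")}</ul>`;

// Domain-specific wrappers that reuse the generic pieces:
const Comment = (c) => ListItem(`${c.author}: ${c.body}`);
const CommentList = (comments) => List(comments, Comment);

console.log(CommentList([{ author: "sam", body: "Nice post!" }]));
// → <ul><li>sam: Nice post!</li></ul>
```

A hypothetical UserList or PostList would wrap the same List, so the generic markup and behaviour live in exactly one place.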

2. Per-Component CSS

So, now we have CSS Modules. A blessing and a curse. On one hand, they remove the problem of namespace collisions and unwanted style-sheet cascading issues, make your CSS much simpler, and remove much of the need for SASS or LESS. On the other, they're more of a pain in the ass when you want to override a style. Good structure and organization of your CSS, and of where you apply certain styles, is key.

Overall, I'd call them a net gain but not without their own warts. Regardless, I still highly recommend per-component CSS!

3. Network Communication

As nice as promises and the Fetch API are, I have to say I now prefer Axios. "Fetching" a POST just feels odd. And fetch not rejecting on non-network errors (40x responses) feels unintuitive; catching them within promise chains requires extra boilerplate.

Axios just feels more natural. The verbs for performing actions make sense. 40x responses can be caught instead of being checked for. Browser compatibility, if that's a concern, is covered.
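For comparison, this is roughly the boilerplate fetch needs before it behaves like Axios, which rejects on non-2xx responses out of the box. fetchJson is an illustrative name, and fetchImpl is passed in so the sketch stays self-contained:

```javascript
// Wrap a fetch-style function so HTTP errors become catchable rejections,
// mimicking Axios's default behaviour.
async function fetchJson(fetchImpl, url) {
  const res = await fetchImpl(url);
  if (!res.ok) {
    // With bare fetch, a 404 resolves "successfully"; we must check ourselves.
    throw new Error(`HTTP ${res.status}`);
  }
  return res.json();
}
```

With Axios, the `if (!res.ok)` check disappears entirely; a 404 simply lands in your catch block.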

And now...

That's about it! Nothing too revolutionary but still, some nice tidy lessons from being in the trenches.

Ta-ta for now!

Tuesday 30 July 2019

My First React App And What I Did Wrong

I recently had the pleasure of using React to build an internal use only app for Acro Media. I say pleasure because it genuinely was despite the fact I was met with many frustrations with using React for the first time. To be fair, these were more my own shortcomings trying to learn a framework I'd never used before, rather than the framework itself.

As with building any app, my failures taught me far more than my successes. Here are those failures:

1. Poor Structure

This stems mainly from the fact I didn't understand what React really was. I was aware, of course, that parts of my page should be abstracted into components but I didn't understand how to truly utilize that. I didn't, for example, build my components in a way that would make them reusable in a concrete way. Not only were they not reusable within the app but they couldn't be copied into another app, either.

I didn't understand that even though I created a table component, with header and body sub-components abstracted out, I made them very specific to the app. While this app only needed the one table, and so this worked out fine in this instance, I would have had to do a lot of refactoring when I had the need to add a second one. Instead, I should have created a generic table component and used props or higher order components to customize it.

Finally, I used a single src/ directory to hold my handful of components. Even though this worked fine in this very specific case, a larger project with more discrete components would have required a better-defined structure.

Generally speaking, you'd have a Components/ directory separated into components, like Table/.

2. Poor Per-Component CSS

When I began the project, I was aware you could import CSS into a React component. I assumed, wrongly, that this was just a nice way to abstract CSS into smaller chunks. While that is indeed a benefit, oh, how wrong I was.

Thankfully, before the project's end, I realized the power of using per-component CSS. I was further relieved to discover how I could leverage SASS and Webpack with React to make a very powerful combination of tools.

3. Poor Understanding of State

This was highly frustrating. I had only two main components: a form in which you could set parameters with a button to submit them and a table where fetched data should be displayed. A classic "report" app. The problem was, these components were sister components, on the same level, with a common parent but I didn't understand how to react (lul) to an event in one component so that the other would automatically get updated.

Searches repeatedly suggested using Redux, but I was convinced that route was complete overkill for what I was trying to achieve. I wasted a heck of a lot of time looking into Redux and other solutions. Shouldn't it be simple to link components up? Of course, I could use props from the parent to provide callbacks to update state in the App but that felt so wrong.

How can something that feels so wrong be so right?

Maybe there is an even better way but, as it turns out, using Hooks (or state in a class) is really the easiest way. Store the state of the app in the top-level component, pass a state update function into the form component, and pass the state of the app as props to the table component. Easy-peasy.
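Stripped of React, the shape of that solution looks something like this; it's a plain-JavaScript analogy of lifting state up, and all the names are made up:

```javascript
// The parent owns the state; the form gets the updater; the table gets the data.
function App() {
  let rows = [];                               // state lives at the top level
  const setRows = (next) => { rows = next; };  // stand-in for a useState setter
  const Form = { submit: (params) => setRows([`report for ${params}`]) };
  const Table = { render: () => rows.join("; ") };
  return { Form, Table };
}

const app = App();
app.Form.submit("Q3");
console.log(app.Table.render()); // → report for Q3
```

The form never talks to the table directly; it only calls the updater it was given, and the table re-reads the parent's state. That's the whole trick.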

Another option might be to create a global "state" class and dispatch updates to it from components but, honestly, you may as well use Redux at that point.

4. Lots of Classes

The documentation confused me. If I used a function to create a stateless component, and I used a class to create a stateful component, why not just make every component a class? Why did React prefer or encourage functions over classes?

Well, as of 16.8 at least, you almost never need classes any more, and using functions is actually a lot more straightforward, requiring far less boilerplate. After many hours of use, I came to rewrite all my class-based components as functions and I couldn't be happier. Viva function components!

5. No TypeScript

Not everyone may be a lover of TypeScript, and I'm certainly not an evangelist either, but I also appreciate what it's trying to do. I am a fan of strong, static typing and TypeScript is a good step in this direction.

When I started the app, I had some logic-only portions written in TypeScript. Unfortunately, as development ground on, I ended up converting this code to pure JavaScript because of a frustrating bug. I had concluded, wrongly, that something in the transpilation was causing my issue.

As it turned out, this was not the case and I regret that I never converted the code back to TypeScript. Whoops.

6. No Object Destructuring

Prior to this project, I wasn't familiar with destructuring in Javascript. After learning about it, I used array destructuring any time I used a React state Hook but, for some reason, I still didn't know about object destructuring. This in spite of the fact I came across examples that used it. I just didn't comprehend what I was seeing.
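For anyone else who missed it, here are the two forms side by side; the objects are just examples:

```javascript
// Array destructuring — the form every useState call uses:
const [count, setCount] = [0, (n) => n];  // stand-in for React's useState(0)

// Object destructuring — the one I overlooked, ubiquitous for props:
const props = { title: "Report", rows: [1, 2, 3] };
const { title, rows } = props;

console.log(count, title, rows.length); // → 0 Report 3
```

Once you recognize the `{ name } = obj` shape, those baffling examples suddenly read like plain assignment.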

Now that I know about it, I wish I'd used it. Ah, well...

Things That Went Right

Here is a list of things, without explanation, that went right:
  1. Yarn dependency management.
  2. SASS.
  3. Fetch API and Promises.
  4. Webpack.
That's all the wisdom I shall pass on.


Saturday 6 April 2019

Trying Ruby

I've never used Ruby. I have friends who swear by it and the Rails framework is incredibly well known. All I know about the language is that it's dynamically typed and incredibly flexible. A million ways to do any given task.

Now, I am not typically a fan of dynamically typed languages. I use PHP in my current job and while its "typeless-ness" can be useful and allow for quick prototyping of ideas, it is also fraught with many pitfalls that bite you in mysterious ways.

That being said, you can program in PHP, and I suspect Ruby, in a safe and responsible manner and I find myself interested to see what I can do.


To get started, I've gone to Ruby's main website. I normally work in Ubuntu but today I'm on my Windows machine. A quick skim over the page leads me to the Windows Ruby Installer page. I chose the default of installing Ruby with the devkit, which includes MSYS2 so gems that require C can be compiled. It seemed like a reasonable choice since I was new to the language.

After running the installer, I was given the choice to run an MSYS2 utility for further installation configuration. I chose the defaults for all options.

I also decided I wanted to install Ruby on Rails. The tutorial page for the framework warns you should get to know Ruby first, and I probably should, but I'm going to go ahead and install the framework anyway. I am already aware of RubyGems, the package manager for Ruby, and used it to install Rails per the Getting Started document on the website. It also looks like using the Windows installer was a good choice, as it includes sqlite3 by default, which the Rails tutorial recommends. Excellent!

Getting Started

These tutorials can be a mixed bag. With programming languages, they tend to be slow, boring and uninformative; targeted at new programmers and not experienced ones. That said, I've opted to give the Ruby quick-start Ruby in Twenty Minutes a whirl to see what's up.

It encouraged me to start up the REPL for Ruby and do some basic tasks. Some notes:
  • There's an exponential operator (**).
  • Variables are dynamically declared, as expected, and require no special characters to denote a variable.
  • Function declarations are reminiscent of Pascal, where you declare the function start (def) and close the block with end. An interesting choice.
  • Functions can be called with or without brackets. I'm not a fan of that and I think in my own code I would always include the brackets.
  • String interpolation is nice if not slightly unconventional compared to other languages with the feature.
  • I was glad to see there's some kind of notion of private/hidden class fields/properties.
  • The ease of use of reflection is interesting.
  • Accounting for variable types will likely require a lot of boilerplate.
  • Using `if __FILE__ == $0` reminds me of Python.
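A few of those notes, in code:

```ruby
def square(n)   # def ... end delimits the block, Pascal-style
  n ** 2        # the exponential operator
end

name = "Ruby"
greeting = "Hello, #{name}!"   # string interpolation

puts square(3)   # parentheses are optional on calls, but I prefer them
puts greeting

if __FILE__ == $0   # the Python-esque "run as a script" guard
  puts "Running directly"
end
```

Nothing fancy, but it covers most of what Ruby in Twenty Minutes walks you through.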

Initial Musings

There are certainly things I like and things I don't. I prefer statically typed and strongly typed languages. Ruby is dynamically typed and weakly typed. Like with PHP, it's just something you have to live with or work around.

It might be a bit premature to say after only spending a few minutes with the language, but it seems that, like with most dynamically and weakly typed languages, you have to expend a fair amount of effort on boilerplate to determine what type any given variable is before acting on it. That said, duck typing and reflection do cut down on this a bit.

Coming bundled with an industry-wide accepted package manager is a refreshing change of pace. Package management is a tough topic and many languages still struggle with it. NodeJS, for example, has multiple competing tools. Go is only just starting to figure out its own management system. C and C++ don't really have one, unless you count system tools like yum or apt.

Honestly, this is my second look at Ruby. In my first attempt, many years ago, I can quite confidently say I did not give it a fair chance and I didn't like it. That's not Ruby's fault but my own. This second go-around has left me with a different impression and I look forward to delving in further.

Going off the Rails

Where's the Rails portion I talked about? Well, that will come later.

Ta-ta for now!

Saturday 3 November 2018

Docker for Developers - An Introduction

While Docker has been around for a few years, it is fairly new to me. The organization I work for has only been using it for about a year but I've never had to delve too deep into using it.

In most cases, I run a command to start my containers and start working. It all just works out of box, as it was intended.

However, I recently had a need to set up a new project on my own where our standard setup wouldn't work, and I realized I really don't know much about Docker. So, this article is about a few key parts of Docker that I struggled with, in the hopes that others might learn something of benefit.

Images and Words

Dream Theater references aside, one key aspect of Docker I didn't understand was the difference between an image and a container. I thought the words were interchangeable but, alas, they most certainly are not.

One metaphor I read about was that an image is to a class as a container is to an instance. I think that's fairly accurate. Saying that a container is just a running image isn't quite right because it doesn't capture the full picture. A container is not only a running image but also has additional configuration and may have a filesystem attached.

You can have two containers, two instances of an image, with different settings. This is important to note.

Monolithic vs Multiple Containers

Another question was: "Why would one want to run multiple containers orchestrated to work together rather than a single, monolithic image?"

At first, I thought the only real difference was the argument between statically linked binaries and dynamically linked ones. Why go through all the headache of requiring additional tools, like Docker Compose, just to achieve the same thing as can be done with a single image? Turns out, that's not the case.

Of course, reproducibility is a key part of developing an application. The ideal situation is having a development environment that perfectly reflects the production environment so that any bugs or issues can be replicated easily. A single docker image does that job just fine. In fact, in many organizations, web applications are deployed using containers so that the development environment and production environment are, quite literally, identical from a software point of view.

However, unlike a binary, where libraries are linked at compile time and become static dependencies, Docker containers behave more like plug-and-play peripherals.

A perfect example is adding a container to trap emails sent by an application. In a development environment, you can quickly spin up a container containing a tool like Mailhog to capture emails without affecting the rest of the environment.

Another advantage is that you can mock application services with a dummy container or even swap different versions of a dependent service without having to rebuild the image each time.

Imagine needing to swap between different versions of PHP. You could have 3 complete images of 500 MB each to cover PHP 5.6, 7.0 and 7.2 or, you could have 3 different PHP images that are 50 MB each. Need to support a new PHP version? Easy! Just pull down another PHP image rather than be forced to rebuild an entirely new image for your app.

Configuring Containers

I'm a bit embarrassed to admit it, but I didn't understand how to configure a container. I assumed you had to build an image, instantiate a container, open an interactive terminal, and perform the configuration by hand. The idea felt all wrong, but I didn't understand why. The reason it felt wrong was because, well, it was wrong.

The problem was, I didn't know what Volumes were or how to use them. I also realized I didn't understand how containers were persistent or how to access files on a local computer. I assumed, again incorrectly, that you must have to copy the files into the image or container. Again, how wrong I was.

When starting an Nginx or Apache server, you need to be able to configure it. Whether it's core configuration or virtual host configuration, you need to supply this information to the web service. Did you create the file outside the image (yes) and then copy it in (no; well, not exactly)?

This is when the brilliance of Docker finally sank in.

You can configure the web server using the configuration files for the deployment server, whether they use containers or not, by attaching them to a container via volumes.

Volumes are exactly like file system mounting. Take a directory, or file, on your local system and mount it to a directory in the container's file system. In the case of Nginx, you might attach a "myapp/nginx.conf" to "/etc/nginx/nginx.conf" of the container. Then, when the container is started, it uses your local filesystem's file as if it were part of its own filesystem. A little like a chroot. It's brilliant!
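To make that concrete, the Nginx example might look like this on the command line. This is only a sketch: it assumes Docker is installed, and the image tag and paths are illustrative.

```
docker run -d --name web \
  -v "$PWD/myapp/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx
```

The :ro suffix mounts the file read-only, so the container can use your local config but never modify it.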

This, of course, opens all kinds of wondrous doors! Need to import a database or assets for a web application? Use volumes to mount them where the live system would expect them to be! It's great!

This was a crucial part of the system I was missing and explains one aspect of why images are so reusable. You can have a single image whereby container A uses volumes for project A and container B uses volumes for project B. This comes with major size savings.

Communication is Key

If configuration, volumes specifically, was the biggest hurdle I needed to get over, then the second largest would be intercommunication. How in the hell do different containers communicate?

If one image contains Apache, a second contains PHP, and a third contains MySQL, how in the world do they talk to one another? The web server acts as a proxy to PHP (presumably php-fpm) and forwards script paths to that container. PHP runs the code attached to its container via a volume, but that code needs access to MySQL which is, again, in another container.

Within a single image or server, all these programs live in one space and so intercommunication happens (typically, but not always) through localhost. So, how in heck does it work with multiple images that have no knowledge of each other?

Enter networks and some Docker magic. To use multiple containers, you need to use docker compose, hereinafter referred to only as compose. Compose does two things:

  1. Creates a network for the containers to communicate over.
  2. Sets up each container's hosts file with the service names.

Now, with some simple setup, different images can communicate with each other as if they were all together in a single environment. Far out, solid and right on!
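Here's a minimal, hypothetical compose file in the spirit of the PHP example above; the image names, ports, and paths are all illustrative:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  php:
    image: php:7.2-fpm-alpine
    volumes:
      - ./src:/var/www/html
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Compose puts all three services on a shared network, so the PHP code can reach the database simply at the hostname db, as if they lived on one machine.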

Alpine Skiing

Last, and I think a bit more minor, might be what these Alpine images are. Alpine is a small Linux distro specifically created for Docker. An Ubuntu image is quite large. Unless you have a specific need to perfectly imitate an Ubuntu system, Alpine is an excellent choice.

However, I have discovered a potential best of both worlds that I haven't tested yet.

Anyway, that's the gist of Alpine and its primary use.

Wrapping Up

That about covers the things I wanted to talk about. With any luck, a fellow newb will learn something here that helps them on their way to devops greatness!

So...ya. Bye!

Thursday 6 September 2018

C++ Text Template Engine - Part 1: Overview

This post will be the first in a series of undetermined length documenting my journey to write a text template engine in C++. No doubt, something like this already exists, but those I happened to look at didn't really whet my whistle. Not only that, there are aspects of C++ I want to explore, and this will give me an excellent opportunity to do just that.


I would classify myself as an intermediate programmer overall and a somewhat novice to C++. Certainly, I am drastically behind the times of modern C++ and I was never a strong programmer in the language to begin with. So, the first motivation is to help improve my C++ chops a bit. Why?

Well, that leads me to my second motivation: Not long after I started working for my current employer, I was tasked with maintaining our in-house systems and they were authored almost 20 years ago. The main software functions much as a modern web-server except that it runs like a CGI script. Each page load invokes a program to build the page and output it over HTTP. Some of these pages are ludicrously complex and are all done through print (std::cout/std::ostream) statements. It's horrific to maintain.

As part of the effort to make the code base more maintainable I desire to have a template engine so I can create templates and inject data into them. Due to budgets and our main focus of development, completely replacing the system isn't currently in the cards but we are frequently making changes to it. So, in the meantime, I need to maintain it.

Recently, I created a stop-gap by using regex to perform search and replace on a template document. It has drastically improved the situation but falls far short of the mark. I'd like to further improve the system.

General Overview

So, the plan is to document the entire process. Including the mistakes. I want to give a sort of free form exploration as I meander through the process and hopefully you, the reader, learn something along the way.

The idea here is that I want to demonstrate a little about what agile programming looks like and demonstrate how the thought process works when scoping and building out a moderately sized piece of software.

I've already gone on long enough, so it's time to find a place to start.

User Stories

A good place would be to create user stories. These are just statements that describe what the user/client (in this case, me) wants. I used to think them quite silly and pointless but I've recently come to appreciate them.

They give you a starting point to begin a discussion and they help keep you on track. Too often I've found myself over-engineering or falling short of client expectations and it usually has been a result of not fully understanding their needs. That's where the user story comes in. It serves as a jumping off point to understanding what they want.

Story 1: As a developer, I would like to create a document, to act as a template, that can have portions of it replaced dynamically to create a final document. This would ease creating pages for my application by avoiding the need to construct these pages with print statements.

Story 2: As a developer, I will need to be able to use any basic data type as well as some standard library containers.

Story 3: As a developer, I need a method to perform conditionals and loops. I have lists of users and accounts I will need to loop over and print.

Story 4: As a front end developer, I want the syntax to be easy to learn and familiar to me. I don't want to spend a lot of time learning another language.

Story 5: As a developer, it would be useful that any classes I currently have in my code base be compatible or could be integrated easily into this system.

Initial Thoughts

Perfect. So, what can we glean from that?

I need to have a source document, maybe a file on disk or a string. Reading files was not part of the user story so I think I should dismiss that for now. All I care about is the actual source and either a data stream or string will suffice as that lets me take data from any source and allow me to transform it. I worry a little that a data stream might be a pain to work with so I'm thinking that I'll want the data source to be passed in as a string. That more or less covers story number one.

In the second story, I need to be able to access basic data types like an integer, float or character string. I also need to be concerned with more complex data structures like vectors or maps. Classes will be addressed in the last user story so I'll forget about that for now. Off the top of my head, I'm thinking I'll need to have some kind of base value type with several derived classes for each basic type. I'm thinking ahead a bit here so I'll leave it at that for the moment.
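Thinking out loud, that base-type idea might start as a sketch like this; the names are provisional and this is exploration, not a final design:

```cpp
#include <string>

// A base value type with one derived class per basic type. Each value only
// needs to know how to render itself into the output document.
struct Value {
    virtual ~Value() = default;
    virtual std::string str() const = 0;
};

struct IntValue : Value {
    int v;
    explicit IntValue(int value) : v(value) {}
    std::string str() const override { return std::to_string(v); }
};

struct StringValue : Value {
    std::string v;
    explicit StringValue(std::string value) : v(std::move(value)) {}
    std::string str() const override { return v; }
};
```

Containers like vectors and maps could then hold pointers to Value, which is where the "thinking ahead" about derived classes would pay off.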

User story number three asks for conditionals. That could be something like an if statement or a switch statement. Since I want to keep things as simple as possible, I will likely just use an if statement with an else clause. This story could use some fleshing out.

For loops, I feel I am faced with the classic for and while statements. In the interest of simplicity, I will only pick one. Since I will likely be passing data structures into the template engine, a for loop with an each or range behaviour likely makes the most sense. In Go's standard library template package, they eschew the for and while keywords for the range keyword, and I kind of like that, but it may be problematic for user story four. Perhaps a keyword of foreach might be apropos.

A familiar syntax for front end developers would be one like Mustache, Django, or Twig. I think if I adopt something similar, that will ease adoption. While some systems use different tokens for commands and identifiers, I think I can safely use one and just enforce some simple naming rules common to almost every programming language.
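As a sketch only, a hypothetical template in such a Mustache/Twig-flavoured syntax might look like this (none of these keywords are final):

```
<h1>{{ title }}</h1>
{{ if user.active }}
  <ul>
  {{ foreach account in user.accounts }}
    <li>{{ account.name }}</li>
  {{ end }}
  </ul>
{{ end }}
```

A single {{ ... }} token serves both commands and identifiers, with the naming rules distinguishing them.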

Finally, I need to be able to support existing classes. To my knowledge, C++ does not have reflection and converting large classes of data into maps would not be fun or efficient. I'm thinking that I can maybe create an interface by which my existing classes could fulfill in order to make them compatible with this system. I'm going to have to think more on it but I need more information first.

The Stage is Set

That completes the initial breakdown and user stories. The next phase will be discovery, which I'll cover in the next article.

Sunday 19 August 2018

Writing Interfaces in C++

After my last article, I got to thinking about utilizing interfaces in C++. I am not an expert in C++ by any means and most of the code I have to work on is both antiquated (C++ 98) and poorly written (even for its time). Most of my time is spent writing PHP and Go, so using interfaces is quite common.

Interfaces, abstract classes in C++, are not used at all in the code bases I work on regularly. It got me to thinking: "could Go or PHP style interfaces be done in C++?"

Virtual Recap

A virtual function is one that is defined in a base class and intended to be redefined by derived classes. When a virtual function is called through a base class pointer or reference, the derived version (if it exists) is the one that runs.

A key point to make is that virtual functions do NOT need to be defined by a derived class.

To see an example, check out my previous post.

Pure Virtual Goodness

In PHP, you would call these abstract functions. In C++, they are called pure virtual functions. Perhaps a better name would be purely virtual; that phrasing makes the purpose of the feature clearer: declare that a function exists as a method on a class but leave the definition for later. It exists purely in a virtual sense.

Abstract functions are declared in a base class and implemented by a derived class. Class A provides a prototype for a function; class B, which extends class A, actually implements it.

A pure virtual function is a contract. A class which extends a class with an abstract, pure virtual function must implement it before it can be instantiated. This guarantees that the function will exist somewhere in the class hierarchy. Otherwise, it would be no different than calling a function that has not been defined.

A pure virtual function looks like this:
virtual void Read(std::string& s) = 0;
This declares a virtual function Read. The notation of assigning zero to the function designates that the function is purely virtual.

Abstract classes as Interfaces

A class with at least one pure virtual function is considered abstract, and one consisting of nothing but pure virtual functions can act as an interface. Since C++ does not have interfaces in the same way other languages do, abstract classes fulfill the same role.
class Reader {
public:
    virtual void Read(std::string& s) = 0;
};
The above class is fully abstract. Any classes that extend it will need to implement the method Read.

Or do they?

Compound Interfaces

I am a proponent of small, simple interfaces that can be combined to create ever more complex ones. This is the interface segregation principle from the SOLID design principles. In C++, combining interfaces means multiple inheritance, which raises the spectre of the diamond problem. Thankfully, there is a way to deal with that.

Deriving with the virtual keyword tells the compiler to share a single copy of any common base class, sidestepping the diamond problem. A class that does not implement the pure virtual functions it inherits simply remains abstract as well. This allows multiple interfaces to be combined.
class Reader {
public:
    virtual void Read(std::string& s) = 0;
};
class Writer {
public:
    virtual void Write(const std::string& s) = 0;
};
class ReadWriter : public virtual Reader, public virtual Writer {};
Here, the interfaces Reader and Writer get combined into a third abstract class ReadWriter. It too, could add further pure virtual functions if desired.

Implementing an Interface

Implementing the interface is the same as deriving any class. So, to tie everything together, here's a complete example:

#include <iostream>
#include <string>

class Reader {
public:
    virtual void Read(std::string& s) = 0;
};
class Writer {
public:
    virtual void Write(const std::string& s) = 0;
};
class ReadWriter : public virtual Reader, public virtual Writer {};

class SomeClass : public ReadWriter {
    std::string buf;
public:
    void Read(std::string& s) override { s = buf; }
    void Write(const std::string& s) override { buf = s; }
};

void readAndWrite(ReadWriter& rw) {
    rw.Write("Hello");
    std::string buf;
    rw.Read(buf);
    std::cout << buf << std::endl;
}

int main() {
    SomeClass c;
    readAndWrite(c);
    return 0;
}
The one caveat is that an abstract class must be handled through some kind of indirection, either a pointer or a reference. This requirement makes sense since an abstract class cannot be instantiated directly.


Using pure virtual functions and virtual inheritance, it is indeed possible to describe behaviour in C++ as would be done in other languages with interfaces. Virtual inheritance has further uses in multiple inheritance generally. To learn more, check out this StackOverflow answer.

Happy programming!