
A Capability Systems Engineer?
Yes, a Capability Systems Engineer. I did invent that job title, but I think it’s fair: there’s no conventional way to describe what I’ve been doing for the past few months. I didn’t think to try to name it until I read an article highlighting the importance of a job title in signalling the value of the role for the person delivering it, and as a signal of key value creation within a business. It made me wonder: if I had to specify a job title, what would I call what I spend most of my time doing at the moment?
Even though the Essential Skills Series books are the first visible product of Edaith, I’ve avoided calling myself an author or Edaith a publisher. I never set out to build a print publishing house; my goal is to create better ways for people to understand and master transversal skills. I always imagined this would happen through building things with technology, and starting with books seems like a good place to begin (cases in point: Canva, Amazon).
We are hardwired to forget most things we encounter. Print books are a way to address the limitations of our memory and, done well, create a delightful tangible experience for the learner. As long as we are physically embodied in the world, having and holding a beautiful object is something special. The quality of research and information curation, coupled with my obsession with good design, is something I’m proud of, and together they’re the foundation for Edaith’s mission to enable people to upskill more effectively.
Since the Problem Solver Essential Skills Series guide was released, the feedback has been that people love the book, but some feel hesitant about, or overwhelmed by, engaging with the information and practices it identifies. At the same time, the skills these guides cover are as important as ever. Technology transformations are giving people new opportunities to deliver their work, do things they’ve always wanted to do, or move into new types of roles. Uniquely human skills are needed to achieve complementarity between people and new technologies; transversal skills enable people to thrive in this process.
So while developing the next skills guides, I’ve been working out how to make the content from the series part of a blended learning system (online and offline), one that solves capability development challenges for teams and organisations in a way that centres on empowering each individual.
People want learning that is time-efficient, personalised to their existing knowledge and experience, and able to empower their work in practice today.
Companies want scalable resources and measurable ways to support capability development, so they can understand training effectiveness and ROI.
Capability building needs better measurement
A principle from my city planning policy days has always been core to my work: if you can’t measure it, you can’t effectively change it. When I was innovation consulting after my PhD, measuring the outcomes of intrapreneurship and entrepreneurship programs was a key challenge. I developed a suite of post-program indicators to better understand and quantify the outcomes and impacts of training formats, and for program participants I developed post-workshop questionnaires to capture their satisfaction with the learning experience.
These measures primarily served as data points for our program development and marketing. Whilst we certainly provided insights to organisations to show the value of engagement, individuals were none the wiser about how proficient their skills were before or after the training program.
The impact of learning and development programs should be clearer for individuals seeking to effectively upskill and organisations investing increasing budgets into developing people.
Quantifying individual learners’ understanding and knowledge acquisition isn’t something training providers are incentivised to do. Courses and programs are designed generically so they can be rolled out en masse, and measuring individual capabilities would put providers on the hook to deliver real results for each participant. The current standard deems learning and development ‘successful’ if a person simply participates in a training format (i.e. shows up and hopefully pays attention), or if a person passes an assessment, exam or quiz based on its standardised curriculum (i.e. replicates content).
Vibe coding to context engineering
Over the last year I’ve been playing with Claude to code design features for my Shopify site (before Shopify’s built-in assistant chatbot could do that). I tried a couple of times to make a quiz website for the Problem Solver skills guide, but wasn’t happy with the results, so I put it aside and worked on other things.
I was also using Claude as my IT ‘person’ whenever there was a technical thing I wanted to do but didn’t know how: just asking for step-by-step instructions for doing things in an application that I’d never done before but knew should be possible. What magic!
I tried to learn Python a few years ago with the intention of using it for research, but gave up at the point of needing to build a project (I was pregnant with my third child at the time, which may have lowered my tolerance threshold). But working with Claude in chat, asking for outputs and then asking further questions about things in the code I’m curious about, has helped me understand programming logic better than the course ever did.
Can I personally code anything from scratch? Absolutely not. But can I build software applications by instructing and iterating with an LLM, implementing code until we find a solution to the bugs and limitations that arise during development? Absolutely.
Enter capabilities assessments: Personalised learning tools
Over the New Year holiday break I gave myself some time to play with an idea: breaking the skills covered in the Essential Skills Series guides into frameworks of competencies, then enabling people to engage only with the aspects of a transversal skill that address their particular shortfalls or knowledge gaps.
As the first example, I made a prototype of a situation- and scenario-based self-assessment system for generative AI capabilities, a topic I’m familiar with from my work since 2019. I built on some excellent German research, recently published by a government-funded research institute for AI adoption, adding further aspects my own research identified as central to generative AI proficiency. The framework deconstructs the skill into 21 competencies across different spheres to identify people’s strengths and the areas that would benefit most from development. Beyond enabling targeted development, simply seeing the skill deconstructed helps people better understand the diverse aspects of a transversal skill, and that it requires agency in domains we might not immediately associate with that skill.
When you complete the form for your chosen skill, within a couple of minutes you receive an Individual Report that explains your results and identifies development recommendations based on your responses. The report includes an overall proficiency score and a score for each competency sphere within that skill. It also identifies the orientation you currently align with when applying the skill.
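For illustration, the scoring described above could be sketched roughly like this (a minimal hypothetical, assuming a 1–4 response scale and simple averaging; the competency names and sphere groupings below are invented for the example, not the actual Capability Profile framework):

```python
from statistics import mean

# Hypothetical responses: competency -> self-assessed level (1-4).
# These names and groupings are illustrative only.
responses = {
    "prompting": 3, "output_evaluation": 2, "data_privacy": 4,
    "tool_selection": 2, "workflow_integration": 3,
}
spheres = {
    "technical": ["prompting", "tool_selection"],
    "critical": ["output_evaluation"],
    "ethical": ["data_privacy"],
    "practical": ["workflow_integration"],
}

# A score per competency sphere, plus an overall proficiency score.
sphere_scores = {
    name: mean(responses[c] for c in comps)
    for name, comps in spheres.items()
}
overall = mean(responses.values())
```

The real report will of course weight and interpret responses far more carefully; this only shows the shape of the aggregation.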
There are 4²¹ ≈ 4.4 trillion possible Individual Report results! Naturally, responses will cluster into a far smaller subset of those combinations. Either way, each person who undertakes the Capability Profile receives a unique report reflecting their particular development needs and strengths-based opportunities.
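As a quick sanity check on that figure (a minimal sketch; the four response levels per competency are inferred from the 4²¹ calculation itself):

```python
# 21 competencies, each answered on one of 4 levels (inferred from the 4^21 figure)
NUM_COMPETENCIES = 21
LEVELS = 4

combinations = LEVELS ** NUM_COMPETENCIES
print(f"{combinations:,}")  # 4,398,046,511,104, i.e. just under 4.4 trillion
```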
