Jayanth Kumar's Blog


January 24, 2025

A Pendulum Swung Too Far — Again

A Pendulum Swung Too Far — Again: Reflections on AGI and the Cycles of Progress

Prologue

In the unending quest to build intelligent systems, we find ourselves at a crossroads once again. The march toward Artificial General Intelligence (AGI) is dazzling, relentless, and, at times, utterly blinding. For every breakthrough, there is an unspoken undercurrent: Have we swung too far — again?

Swing of the Pendulum

Progress, like a pendulum, oscillates. It does not move in a straight line. It sweeps from one extreme to another, dragging entire fields of inquiry with it. In Kenneth Church’s brilliant paper, “A Pendulum Swung Too Far,” he chronicles this very phenomenon in the evolution of computational linguistics. Rationalism gave way to Empiricism, theoretical rigor bowed to data-driven pragmatism, and somewhere along the way, the pendulum reached its apex.

Today, in the AGI revolution, we see the pendulum swinging with similar ferocity. Scaling dominates our collective psyche. Compute, data, and parameter counts — these are the new gods of progress. We marvel at their power, but do we pause long enough to ask: What are we leaving behind in their shadow?

History as a Mirror

Church reminds us that the 1990s rise of empiricism was a necessary corrective. Data had suddenly become abundant, and researchers, reeling from the overpromises of earlier decades, pragmatically tackled simpler, solvable tasks like part-of-speech tagging instead of obsessing over AI-complete problems. But in the heady rush to harness data, foundational questions — those dealing with long-distance dependencies and the nature of meaning itself — were left by the wayside.

Empiricism’s triumphs were real but limited. Methods like n-grams and finite-state systems delivered results, but they could not grasp the deeper, more intricate fabric of language. It was only later, when the pendulum began to swing back, that the field reconnected with these deeper truths.

And now, AGI finds itself at a similar inflection point. Scaling — massive models, billions of parameters, unfathomable compute — has delivered stunning results. But let us not mistake these results for true intelligence. Let us not forget that scaling, like empiricism before it, is a means, not an end.

Often breakthroughs come from revisiting the so-called “limitations.”

The Illusion of Progress

Here lies the bitter truth: the systems we celebrate today — transformers, large language models — excel at optimization but falter at understanding. They are pattern matchers, not philosophers. They mimic intelligence but do not embody it.

Church’s pendulum teaches us that this is not failure; it is inevitability. Progress is cyclical. The limitations we ignore today will become the fertile ground of tomorrow’s breakthroughs. The deeper truths of AGI — reasoning, causality, adaptability — are waiting to be rediscovered, just as syntax and semantics waited in the shadows during the empirical rush of the 1990s.

Scaling will plateau. It always does.

And when it does, we will be forced to confront the questions we have deferred:

What does it mean to reason?

How do systems adapt to the unknown?

Can intelligence exist without understanding?

The Duality of AGI: Scale vs. Depth

If there is a lesson here, it is this:

Progress requires duality.

Scaling and depth, pragmatism and philosophy, engineering and science — these are not opposites. They are partners, two sides of the same coin. To swing too far toward one at the expense of the other is to court stagnation.

The AGI revolution will not be won by scaling alone. The next great leap will come from harmonizing brute computational power with the subtlety of human-centric design. It will come from systems that do not just predict but understand, that do not just scale but adapt.

A Manifesto for Balance

To those working at the edge of AGI, here is the manifesto:

1. Revisit the Forgotten: The past holds treasures. Concepts dismissed as too narrow or outdated often contain the seeds of innovation. Look back to move forward.

2. Scale with Purpose: Scaling is a tool, not a destination. Use it to explore deeper questions, not as an end in itself.

3. Celebrate the Specific: Task-specific insights are not limitations; they are foundations. The leap to generality begins with mastery of the particular.

4. Bridge the Divide: Refuse to choose between pragmatism and philosophy. Build systems that honor both.

5. Question the Easy Wins: The low-hanging fruit of scaling will run out. Prepare now for the harder, higher-hanging challenges of reasoning, adaptability, and true intelligence.

The balance will come from accepting this duality: leaning into computational scale while staying grounded in the nuances of human-centric modeling.

Message

Don’t despair if your work feels task-specific or narrow. The intuitions you develop today could inspire a leap forward tomorrow. Keep looking for concepts that feel both powerful and general, and don’t shy away from asking, “How do we scale this?” As Church warns, we need to strike a balance. Scaling is crucial, but let’s not lose sight of foundational challenges.

Low-hanging fruit is great, but we can’t stop there.

Epilogue

The pendulum will swing again — it always does.

Scaling will give way to introspection, and the glittering achievements of today will become the stepping stones of tomorrow. Whether you’re in machine learning, computational linguistics, or another domain, remember:

Progress isn’t linear — it oscillates, and in that oscillation lies the beauty of discovery.

The AGI revolution is not just a race to build bigger models. It is a journey to understand the essence of intelligence itself. And in this journey, every step matters.

What do you think? Are we, as a community, striking the right balance? Or are we destined to repeat the cycles of history, swinging too far before we swing back?

Let’s reflect — and, more importantly, let’s act.


A Pendulum Swung Too Far — Again was originally published in Technopreneurial Treatises on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published on January 24, 2025 08:31

June 10, 2023

Building Configurability in SaaS

Prologue

Every change breaks someone’s workflow. (credits: xkcd comics)

So, you have decided to build a data-driven Software as a Service (SaaS) start-up after reading my old article — To pivot or not to pivot. After settling on the business requirements of your application, you start contemplating what the right configurability of the software should be, given those requirements. You revisit the possible options for configurability, relook at the business requirements, and then comes the problem:

The Problem

As you continue to build your new SaaS application and sell it, there is another critical aspect that demands your attention: building and managing configuration properly. The configuration of your application, including its technical and business settings, becomes crucial for ensuring the security, privacy, durability, and configurability of your SaaS.

In the following sections, we will explore the difficulties and importance of building application configuration correctly in the context of SaaS, providing insights and guidelines to help you navigate this complex landscape effectively.

The Solution

Configurability is a crucial requirement for any SaaS application, encompassing both technical and business needs. To address this, configurations are divided into two categories: technical configuration (referred to as the application manifest) and business configuration (referred to as settings and preferences). It is vital to keep both configurations separate from the code for security, privacy, durability, and ease of customization. Let’s explore each in detail:

Application Manifest

The application manifest consists of technical configurations that are specific to the application and may vary across different deployments, such as staging, production, or developer environments. It typically includes:

● Resource handles for databases, caches, and other supporting services.

● Credentials for external services like Amazon S3 or Twitter.

● Deploy-specific values like the canonical hostname.

According to the twelve-factor app methodology, these configurations should be stored in the environment. The manifest should support versioning, not only for the application but also for the manifest itself, including its schema.
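As a minimal sketch of this approach, the Python snippet below assembles a manifest from environment variables, carrying a version for the manifest schema itself; the variable names and defaults are hypothetical, not a prescribed contract.

import os

MANIFEST_SCHEMA_VERSION = "1.2"  # version of the manifest schema itself

def load_manifest() -> dict:
    """Assemble the application manifest from the environment (12-factor style)."""
    return {
        "schema_version": os.environ.get("MANIFEST_SCHEMA_VERSION", MANIFEST_SCHEMA_VERSION),
        "app_version": os.environ.get("APP_VERSION", "0.0.0"),
        # Resource handles and credentials are injected per deployment
        # (staging, production, developer machine), never committed with the code.
        "database_url": os.environ["DATABASE_URL"],            # required: fail fast if missing
        "cache_url": os.environ.get("CACHE_URL", ""),
        "s3_secret_key": os.environ.get("AWS_SECRET_ACCESS_KEY", ""),
        "canonical_hostname": os.environ.get("CANONICAL_HOSTNAME", "localhost"),
    }

if __name__ == "__main__":
    print(load_manifest())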

Guidelines

Some of these guidelines are adopted from the twelve-factor app methodology.

● Configuration settings should be stored and maintained as close as possible to the environment they apply to.

Placing application settings near the application clarifies their association. From a security standpoint, these settings can only be accessed and modified within their corresponding environment. For production settings like connection strings and passwords, it is essential to limit access. Application-specific settings should reside alongside the code or in the root of the application, while platform-specific settings should be available at a higher level, such as environment variables or web server settings. Website-specific settings should not be accessible at the web server’s root.

● Configuration settings should contain only environment-specific settings.

To maintain simplicity and clarity, configuration files should only include environment-specific settings such as caching, logging, and connection strings. All other settings should be configurable within the application itself.

● Configuration settings, especially secrets, should not be stored under version control.

Storing environment-specific settings (e.g., passwords, connection strings, tokens) in version control is a security risk. These values should be set and stored as close to their respective environments as possible, following a “just-in-time” mindset. Prefer mechanisms such as GitHub’s encrypted secrets or GitLab’s CI/CD variables.

● There should be a single location for configuring application settings.

Application settings should be configured as close to the application as possible. Introducing multiple configuration locations leads to chaos and insecure maintenance.

● Configuration values should be automatically set during deployment.

To streamline the process of configuring multiple environments and settings, it is advisable to automate the configuration process during deployment. This ensures repeatability and reduces the likelihood of errors.

● Changing a configuration setting should not necessitate a new deployment.

Most often, an application resource (for example, a web application) relies on a core resource (for example, a storage account). The application resource depends on the existence of this core resource and needs its settings to run properly. For example, a web application needs a storage account key to access blob storage, or a frontend needs a URL and token address to access an API. Changes in the core resource and its settings shouldn’t force a redeploy of the application resource that relies on it. Likewise, when an API has many unknown clients, changing its URL shouldn’t impact those clients. This can be challenging, but there are many solutions for it.
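One way to approach this, sketched below in Python under the assumption of a central HTTP-accessible config store, is to resolve the core resource’s settings at runtime and refresh them periodically, so a rotated storage key or a moved URL is picked up without redeploying; the store URL and setting names are purely illustrative.

import time
import requests  # assumes the config store exposes a plain HTTP GET endpoint

CONFIG_STORE_URL = "http://config-store.internal/v1/settings"  # hypothetical endpoint
REFRESH_SECONDS = 60

_cache: dict = {}
_loaded_at = 0.0

def get_setting(name: str) -> str:
    """Return a setting, re-fetching the whole set when the local cache is stale."""
    global _cache, _loaded_at
    if time.time() - _loaded_at > REFRESH_SECONDS:
        response = requests.get(CONFIG_STORE_URL, timeout=5)
        response.raise_for_status()
        _cache = response.json()
        _loaded_at = time.time()
    return _cache[name]

# The web app picks up a rotated storage key or a moved API URL on the
# next refresh, without being redeployed:
# blob_key = get_setting("storage_account_key")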

Realization

They can be more platform-centric, like settings for Cloud Foundry, or application-centric, like GE Predix Apphub Configuration. For example, these are the common attribute sets in a Cloud Foundry manifest:

● Buildpack/Image

● Command

● Instances

● Memory quota

● Disk quota

● Health Check (endpoint, timeout and type — port, process, http)

● Metadata, path, and routes

● Environment Variables

● Services

Architecture

The application manifest configuration, which can be realized as a key-value store, can be coupled with the service registry and service discovery, while load balancing can be coupled with the reverse-proxy router for the services (health checking can be supported by either). Some of the tools for service configuration are:

Apache Zookeeper

Consul

Etcd

Eureka

Consul, the most widely used of these, provides first-class support for service discovery, health checking, and key-value storage for configuration. Eureka solves only a limited subset of these problems (health checking, service registry, and discovery), expecting other tools such as ZooKeeper to be used alongside it as a distributed key-value store. Etcd is the most lightweight and serves only as a strongly consistent configuration key-value store. Both Consul and Etcd are written in Go, while ZooKeeper and Eureka are written in Java.
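As a small illustration of the key-value side, the sketch below reads and writes one configuration key through Consul’s HTTP KV API using Python’s requests library; the agent address and key path are assumptions, and values are returned base64-encoded.

import base64
import requests

CONSUL_ADDR = "http://localhost:8500"        # local Consul agent (assumed)
KEY = "myapp/production/database_url"        # hypothetical key path

def write_kv(key: str, value: str) -> None:
    requests.put(f"{CONSUL_ADDR}/v1/kv/{key}", data=value, timeout=5).raise_for_status()

def read_kv(key: str) -> str:
    # Consul returns a JSON list of entries; the Value field is base64-encoded.
    resp = requests.get(f"{CONSUL_ADDR}/v1/kv/{key}", timeout=5)
    resp.raise_for_status()
    entry = resp.json()[0]
    return base64.b64decode(entry["Value"]).decode("utf-8")

# write_kv(KEY, "postgres://user:pass@db:5432/app")
# print(read_kv(KEY))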

Manifest Specification Formats

The manifest can be specified in either YAML or JSON. One benefit of YAML is the ability to share configuration through YAML anchors and aliases, which provide references to other data objects; with this referencing it is even possible to express recursive data in the YAML file. JSON, on the other hand, has better serialization and deserialization support across languages and is more suited for API transport.
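To make the anchor-and-alias point concrete, here is a small Python sketch that parses an illustrative manifest with PyYAML; the &defaults block is defined once and merged into two environments via <<: *defaults. The field names are not any particular platform’s schema.

import yaml  # PyYAML resolves anchors (&) and aliases (*) during parsing

MANIFEST = """
defaults: &defaults
  memory: 512M
  instances: 2
  health-check-type: http

staging:
  <<: *defaults            # inherit the shared block
  route: staging.example.com

production:
  <<: *defaults
  instances: 6              # override a single inherited value
  route: www.example.com
"""

config = yaml.safe_load(MANIFEST)
print(config["staging"]["memory"])        # 512M, taken from the anchor
print(config["production"]["instances"])  # 6, overridden locally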

Apart from the manifest, these technical configurations can also be directly handled in a separate application like Settings microapp from GE Predix, but that is more suited to configure business settings and preferences, which is discussed in detail next.

Settings and Preferences

Settings and Preferences encompass the specific requirements of a business, catering to various levels such as tenants, applications, and users. These settings primarily include format settings like time format, currency format, and metric format, as well as internationalization and localization settings. Additionally, preference settings allow users to customize their dashboard display arrangement. It is advisable to store these settings in a datastore to ensure their persistence during platform and version upgrades, as well as their availability for cross-application and cross-user sharing.

Guidelines

To effectively manage Settings and Preferences, the following guidelines should be followed:

● Settings and Preferences should be stored and maintained in the datastore, ensuring their centralized management.

● Settings and Preferences should be able to be set in the application itself, allowing for customization.

● Settings and Preferences should be configurable at multiple levels i.e. tenant-specific, application-specific, and user-specific.

● Settings and Preferences should have standard defaults, initialized as the application or platform constants to ensure consistency.

● Settings and Preferences should be persisted cross-deployment and application versions to maintain continuity and avoid data loss.

Realization

They can be more framework-centric, like the settings for a Django application, or application-centric, like GE Predix Microapp Settings. For example, these can be the common settings based on the above:

● Timezone

● Country

● Locale

● Currency

● Units of Measurement

● Input and Output Formats (Date, DateTime)

● Separators (Thousand, Decimal)

● Symbols (Language, Currency)

● Themes (Links, Display)

● Display Arrangement and Ordering

Architecture

The preferences can be implemented as a three-layer hierarchy similar to read-write-execute settings for user-groups-others on Linux.

We can have a similar set of actors in the system:

1. Admin

2. User

3. Groups

4. Others (default)

Similarly, we can have a similar set of principals in the system:

1. System (default)

2. Tenant

3. Application

We have the following actions on Settings and preferences in the system:

1. Read

2. Write

3. Delete

The permissions of the actors over the principal’s configuration follow these permission-based restrictions, where R stands for Read, W for Write, and D for Delete:

Configuration Permission Model

For actors, Admin has the highest privileges and can read, write, or delete the configuration for everything except System, which holds the default values. Users and user groups can only change their own application settings and configurations. Anonymous (unauthenticated) users can only read the application configurations. For principals, the application configuration takes priority over the tenant configuration, and the tenant configuration takes priority over the System configuration, which is a set of smart defaults. Here, priority implies an inheritance model: the application overrides the tenant configuration where it is set locally and inherits it where it is not. Similarly, a tenant overrides the system configuration and inherits it as-is if the admin does not configure the tenant.

SaaS Privilege-based Configuration Model

The ability to configure settings and preferences at different levels, such as tenant-specific, application-specific, and user-specific, provides a high degree of customization and personalization. This flexibility allows your application to cater to diverse user needs and preferences, enhancing the overall user experience.
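A minimal sketch of this fallback chain in Python, assuming each level stores only what it overrides; the setting names are illustrative.

from collections import ChainMap

# System-level smart defaults (the "default" principal).
system_defaults = {"timezone": "UTC", "currency": "USD", "date_format": "YYYY-MM-DD"}

# A tenant overrides only what it needs; everything else is inherited.
tenant_settings = {"timezone": "Europe/Berlin", "currency": "EUR"}

# An application within that tenant overrides a single value.
application_settings = {"date_format": "DD.MM.YYYY"}

# ChainMap looks up keys left to right: application, then tenant, then system.
effective = ChainMap(application_settings, tenant_settings, system_defaults)

print(effective["date_format"])  # DD.MM.YYYY (application)
print(effective["timezone"])     # Europe/Berlin (tenant)
print(effective["currency"])     # EUR (tenant)

A user-specific layer can be pushed onto the front of the chain in exactly the same way.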

Epilogue

As you continue your journey towards building a highly scalable and flexible system, it is crucial to emphasize the importance of following the above guidelines for building application configurability.

The importance of technical configuration becomes even more evident when considering the challenges associated with cloud enterprise software and SaaS solutions. With the adoption of microservices and a distributed architecture, the management of multiple servers, caches, and databases can become complex. Proper configuration management simplifies this process, allowing you to effectively allocate resources and optimize performance.

Furthermore, storing and maintaining settings and preferences (business configuration) in a datastore ensures their persistence across platform and version upgrades. This not only saves time and effort but also enables seamless sharing of configurations across different applications and users. Having standard defaults and initialization as the application or platform constants ensures consistency and ease of use. By adhering to these guidelines, you ensure that your application can adapt to varying requirements and scenarios.

In conclusion, by following the guidelines for building application configurability, you empower your SaaS solution with the necessary flexibility, adaptability, and scalability.

Solve the problem once, use the solution everywhere!

Building Configurability in SaaS was originally published in Technopreneurial Treatises on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published on June 10, 2023 23:25

May 25, 2023

The Right Approach to Effective Code Reviews

Prologue

So, you have inculcated the “right attitude” for software engineering after reading my old article — The Right Approach to Software Engineering, but you are working in a silo in a small team or a startup, churning out a lot of code. You start wondering whether you are making the right architectural decisions, producing good-quality code, and properly maintaining the software that you are building. You often face the same set of operational issues after deploying, which makes you revisit your repository, relook at the code, check the code quality using code analyzers like Codacy and SonarQube, and then realize the problem:

Problem

You have been churning out poor-quality code, with the same set of design issues, inconsistent style, and bugs, all adding up to a lot of technical debt. Now you want to improve your technical know-how, improve the software quality, and reduce the technical debt of the code you are churning out.

Software engineers working in silos end up producing poor-quality code and struggle with learning the patterns and anti-patterns of software design. Without the right guidance and review of your common design issues, errors, and bugs, you end up making the same code-quality mistakes again and again, which eventually adds to technical debt and poor software maintainability.

Solution

If you are working in a team, you should definitely do code reviews to improve both your own skills and the quality of the codebase and foster knowledge sharing and collaboration, contributing to a positive development culture.

Let’s understand what, why, and how of effective code reviews.

What is code review?

Code Mocking — An Engineering Tradition (credits: Dilbert comics)

Code review is a process in software development where one or more developers systematically examine and evaluate the code written by their peers. It involves reviewing the code for quality, correctness, adherence to coding standards, best practices, and overall maintainability. Code reviews are typically conducted before merging code changes into a shared codebase, aiming to catch bugs, improve code design, identify potential issues, and ensure that the code meets the requirements and expectations of the project. The goal of code review is to enhance the overall code quality, promote collaboration and learning within the development team, and ultimately deliver reliable and maintainable software.

Why do code reviews?

Wanna see the code (credits: xkcd comics)

Code reviews serve several important purposes in software development:

1. Bug Detection: Code reviews help identify bugs, logic errors, and other issues in the code before they reach production. Reviewers can catch mistakes that the original developer might have missed, leading to higher quality and more reliable software.

2. Knowledge Sharing and Learning: Code reviews provide an opportunity for developers to learn from each other. Reviewers can share their expertise, suggest improvements, and introduce new techniques or best practices. It promotes collaboration within the team and helps in spreading knowledge across the organization.

3. Code Quality and Consistency: Code reviews help maintain a high level of code quality and consistency throughout the codebase. By following coding standards, best practices, and design patterns, reviewers can ensure that the codebase is maintainable, readable, and adheres to the established guidelines.

4. Continuous Improvement: Code reviews foster a culture of continuous improvement. Through constructive feedback and discussions, developers can refine their coding skills, enhance their understanding of the project, and identify areas where they can improve their code.

5. Team Building and Accountability: Code reviews encourage collaboration and team building within the development team. It creates a shared responsibility for the codebase, as developers review each other’s work and provide feedback. It also helps establish a sense of accountability, as developers are held responsible for the quality and reliability of their code.

Overall, code reviews contribute to better software quality, improved developer skills, and a more cohesive and efficient development process.

It’s better to succeed together than to fail alone.
How to comment the code?

Bad code (credits: xkcd comics)

Commenting code is an essential practice in software development as it helps improve code readability, maintainability, and understanding. Here are some guidelines for effective code commenting:

1. Use Clear and Concise Comments: Write comments that are easy to understand and concise. Avoid ambiguous or vague comments that can lead to confusion. Comment only where necessary and focus on explaining the intent or purpose of the code.

2. Comment Purpose and High-Level Logic: Comment on the overall purpose and functionality of the code block or function. Explain the high-level logic and any important decisions made during the implementation. This helps other developers understand the code’s intent without diving into the implementation details.

3. Document Complex or Non-Obvious Code: If the code contains complex algorithms, non-trivial logic, or intricate business rules, provide comments to explain how it works. This helps other developers, including your future self, comprehend the code more easily.

4. Use Self-Explanatory Variable and Function Names: Naming variables and functions descriptively can eliminate the need for excessive comments. When code is self-explanatory, it becomes easier to understand without relying heavily on comments.

5. Comment Tricky or Non-Standard Code: If you encounter code that deviates from standard practices or uses workarounds, provide comments to explain the reason behind it. This helps prevent others from mistakenly considering it as an error or making unnecessary changes.

6. Update Comments with Code Changes: When you modify code, ensure that you update the corresponding comments to reflect the changes accurately. Outdated comments can be misleading and cause confusion.

7. Avoid Redundant Comments: Avoid commenting on obvious code or duplicating information that is already apparent from the code itself. Redundant comments can clutter the code and make it harder to read.

8. Use Comment Styles Consistently: Follow consistent comment styles and formatting conventions throughout the codebase. This helps maintain a unified and organized appearance.

Remember that comments should complement the code and provide valuable information that enhances understanding. Well-placed and meaningful comments can significantly improve the readability and maintainability of your codebase.

There are only two hard things in Computer Science: cache invalidation and naming things.
How can it help me as a developer to do code reviews of other developers?

Code Quality (credits: xkcd comics)

Performing code reviews of other developers’ work can greatly benefit you as a developer in several ways:

1. Enhancing Code Quality: Code reviews provide an opportunity to identify and address potential issues, bugs, or inefficiencies in the codebase. By reviewing other developers’ code, you can catch mistakes or suggest improvements, leading to higher code quality.

2. Learning Opportunities: Code reviews expose you to different coding styles, techniques, and problem-solving approaches. You can learn from the strengths and weaknesses of others’ code, gaining insights into new patterns, best practices, and innovative solutions.

3. Collaboration and Teamwork: Code reviews promote collaboration and foster a sense of shared responsibility within the development team. Through the review process, you can engage in constructive discussions, share knowledge, and work together to improve the overall codebase.

4. Consistency and Standards: Code reviews help maintain consistency and adherence to coding standards across the project. By reviewing code, you can ensure that the codebase follows established guidelines, naming conventions, formatting rules, and other project-specific requirements.

5. Knowledge Sharing: Code reviews facilitate knowledge sharing among team members. As you review the code, you can provide explanations, suggest alternative approaches, and offer guidance. This sharing of knowledge benefits both the developer whose code is being reviewed and the reviewer.

6. Identifying Patterns and Anti-patterns: Code reviews allow you to identify recurring patterns or anti-patterns in the codebase. By recognizing such patterns, you can propose refactoring opportunities, suggest code reuse, or identify areas where design patterns can be applied.

7. Error Prevention: Code reviews act as a preventive measure against introducing bugs or issues into the codebase. By thoroughly reviewing code, you can identify potential pitfalls, evaluate edge cases, and spot logic errors before they reach the production environment.

8. Continuous Improvement: Code reviews promote a culture of continuous improvement within the development team. By providing constructive feedback and suggestions, you contribute to the growth of individual developers and the overall team’s skills.

By actively participating in code reviews, you can contribute to a positive development culture and improve both your own skills and the quality of the codebase.

The more you help people, the stronger you get.
How should I do code reviews? How to learn it?

Code Quality 2 (credits: xkcd comics)

To effectively conduct code reviews, consider the following steps and tips:

1. Set clear expectations: Establish guidelines and standards for code reviews within your team or organization. Define the goals of code reviews, the scope of review, and the expected timeline.

2. Review the code independently: Start by examining the code on your own, without any distractions. Understand the purpose, functionality, and design choices made in the code.

3. Follow a checklist: Use a checklist or a set of predefined criteria to ensure thoroughness and consistency in your code reviews. This can include aspects like code style, readability, performance, security, error handling, and adherence to best practices. For example, check for code smells and then, resolve code smells using refactoring techniques.

4. Provide constructive feedback: When you identify areas for improvement, offer clear and specific feedback. Explain the issues you’ve found and suggest possible solutions or alternative approaches.

5. Prioritize and categorize feedback: Differentiate between critical issues that need immediate attention and minor suggestions for improvement. Organize your feedback to help the developer understand the importance and impact of each comment.

6. Be objective and impartial: Focus on the quality of the code and adherence to established standards, rather than personal preferences. Base your feedback on objective criteria and best practices.

7. Balance between nitpicking and high-level feedback: While it’s important to catch small issues, also provide feedback on the overall design, architecture, and problem-solving approach. Find the right balance between detailed suggestions and high-level guidance.

8. Foster collaboration and learning: Code reviews should be seen as an opportunity for growth and knowledge sharing. Encourage open discussions, ask questions, and be receptive to different perspectives. Promote a culture of continuous learning and improvement.

9. Document your feedback: Keep a record of the feedback you provide, either through comments in the code review tool or in a separate document. This helps track progress and allows developers to refer back to the feedback.

10. Learn from others: Participate in code reviews of your peers and learn from their feedback. Observe how experienced reviewers provide suggestions and explanations. Seek feedback on your own code to enhance your skills and understanding.

11. Practice and seek feedback: The more you engage in code reviews, the better you’ll become. Practice regularly and seek feedback from both developers and reviewers to improve your code review skills.

12. Learn from resources and training: Explore online resources, articles, books, and courses on code review best practices. Some organizations also offer internal training or mentorship programs to enhance code review skills. Take advantage of these resources to deepen your knowledge.

By following these steps and continuously learning and practicing from the experience, you can become an effective code reviewer.

Practice makes a man perfect.
How to convey messages politely in code reviews?

Code Quality 3 (credits: xkcd comics)

When conveying messages politely in code reviews, it’s important to consider the following guidelines:

1. Use a friendly and constructive tone: Frame your feedback in a positive and encouraging manner. Instead of simply pointing out flaws, offer suggestions for improvement.

2. Start with positive feedback: Begin by acknowledging the strengths or good aspects of the code. This helps create a positive atmosphere and shows appreciation for the developer’s efforts.

3. Be specific and objective: Clearly articulate the issues you’ve identified or the improvements you suggest. Use specific examples from the code to illustrate your points. Avoid vague or general statements.

4. Focus on the code, not the developer: Critique the code itself rather than criticizing the developer personally. Remember that the goal is to improve the code, not to attack the individual who wrote it.

5. Offer alternatives or solutions: Instead of simply pointing out problems, provide alternative approaches or solutions. This demonstrates your willingness to help and encourages collaboration.

6. Use constructive language: Frame your feedback in a way that promotes learning and growth. Use phrases like “Have you considered…” or “What if we tried…” to encourage dialogue and exploration of different options.

7. Be respectful and empathetic: Keep in mind that behind every line of code is a developer who invested time and effort. Show empathy and respect for their work while providing suggestions for improvement.

8. Provide context and explanations: When suggesting changes or improvements, explain the reasoning behind your suggestions. Help the developer understand why a certain approach may be more beneficial.

9. Focus on the bigger picture: Consider the overall goals of the project and the team. Align your feedback with those goals and emphasize how the suggested changes contribute to the project’s success.

10. Follow up with support: Offer assistance and clarification if the developer has questions or needs further guidance. Engage in a constructive dialogue to ensure that the feedback is well understood.

By conveying messages politely and constructively, you create a positive and collaborative environment for growth and development.

There is a very thin line between hand-holding and spoon-feeding. Spoon feed an infant, but hand hold an adult.

Epilogue

So, to be an effective software engineer, it is necessary to be an effective code reviewer and also to get your own code reviewed. In quality control, four competent eyes are always better than two. After all, software engineering is a collaborative and creative learning process, not siloed labor.

Being a great engineer is all about having the right attitude.

The Right Approach to Effective Code Reviews was originally published in Technopreneurial Treatises on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published on May 25, 2023 12:02

May 21, 2023

Picking the Event Store for Event Sourcing

Prologue

Building event-driven architecture

So, you have decided to go the event-driven route and build your enterprise application using the event-sourcing paradigm after reading my old article — Architecting Event-driven Systems the right way. But for the core of your application — the datastore — you start contemplating which will be the right choice for your architecture based on your requirements. You revisit the possible database choices, relook at the business requirements, and then comes the problem:

Problem

There are multiple requirements for an Event Store and thus different database strategies for supporting event sourcing. How do you go ahead and choose the right database based on these event-sourcing requirements?

Before we dive deep into the requirements, let’s go through the basic terms used in event sourcing.

Concepts

State — A state is a description of the status of a system waiting to execute a transition.

Transition — A transition is a change from one state to another. A transition is a set of actions to execute when a condition is fulfilled or an event is received.

Domain Event — The entity that drives the state changes in the business process.

Aggregates — A cluster of domain objects that can be treated as a single unit, modeling a number of rules (or invariants) that should always hold in the system. An aggregate can also be called an “Event Provider”, since “Aggregate” is really a domain concept and an event store could work without a domain.

Aggregate Root — Aggregate Root is the mothership entity inside the aggregate controlling and encapsulating access to its members in such a way as to protect its invariants.

Invariant — An invariant describes something that must be true within the design at all times, except during a transition to a new state. Invariants help us to discover our Bounded Context.

Bounded Context — A Bounded Context is a system boundary designed to solve a particular domain problem. The bounded context groups together a model that may have 1 or many objects, which will have invariants within and between them.

Requirements

1. Sequential Durable Write — Be able to store state changes as a sequence of events, chronologically ordered.

2. Domain Aggregate Read — Be able to read events of individual aggregates, in the order they persisted.

3. Sequential Read — Be able to read all events, in the order they were persisted.

4. Atomic Writes — Be able to write a set of events in one transaction, either all events are written to the event store or none of them are.

Comparison of Datastores

Let’s walk through different database types and dive deep into how they will support the above requirements.

Relational Database

Event Store Implementation in Relational Database

In a relational database, event sourcing can be implemented with only two tables: one to store the actual Event Log, with one entry per event, and the other to store the Event Sources. The event itself is stored in the [Data] column, using some form of serialization; for the rest of this discussion the mechanism is assumed to be built-in serialization, although the use of the memento pattern can be highly advantageous.

A version number is also stored with each event in the Events Table as an auto-incrementing integer. Each event that is saved has an incremented version number. The version number is unique and sequential only within the context of a given Event Source (aggregate). This is because Aggregate Root boundaries are consistency boundaries. On the other hand, Sequence, the other auto-incrementing integer is also, unique and sequential and stores the global sequence of events based on TimeStamp.

The [EventSourceId] column is a foreign key that should be indexed; it points to the next table which is the Event Sources table.

The Event Sources table is representing the event source provider (aggregates) currently in the system, every aggregate must have an entry in this table. Along with the identifier, there is a denormalization of the current version number. This is primarily an optimization as it could be derived from the Events table but it is much faster to query the denormalization than it would be to query the Events table directly. This value is also used in the optimistic concurrency check. Also included is a [Type] column in the table, which would be the fully qualified name of the type of event source provider being stored.

To look up all events for an event source provider, the query will be:

SELECT Id, TimeStamp, Name, Version, EventSourceId, Sequence, Data FROM Events
WHERE EventSourceId = :EventSourceId ORDER BY Version ASC;

To insert a new event for a given event source provider, the query will be:

BEGIN;

-- Append the event with the next per-aggregate version and the next global sequence.
INSERT INTO Events (TimeStamp, Name, Version, EventSourceId, Sequence, Data)
VALUES (NOW(), :Name,
        (SELECT COALESCE(MAX(Version), 0) + 1 FROM Events WHERE EventSourceId = :EventSourceId),
        :EventSourceId,
        (SELECT COALESCE(MAX(Sequence), 0) + 1 FROM Events),
        :Data);

-- Keep the denormalized current version on the aggregate in sync.
UPDATE EventSources SET Version = Version + 1 WHERE EventSourceId = :EventSourceId;

COMMIT;

Key-Value Store

Event Store Implementation in Key-Value Store

In a key-value store, an event can be modeled by constructing the key as a combination of the aggregate id and the version number. The serialized event data can be stored as the corresponding value. This method of constructing the key enables the chronological storage of events for a specific aggregate. To access data from the key-value store, knowledge of the key is required. It might be feasible to retrieve events for a particular aggregate by knowing only its aggregate id and incrementing the version number until a key is not found. However, a challenge arises when attempting to retrieve events for any aggregate, as no portion of the key is known. Consequently, it becomes impossible to retrieve these events in the order they were stored. Additionally, it’s important to note that many key-value stores lack transactional capabilities.
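A rough Python sketch of this key-construction idea, using an in-memory dictionary to stand in for the key-value store; the key format and event payloads are assumptions.

import json

store: dict[str, str] = {}  # stand-in for the real key-value store

def append_event(aggregate_id: str, version: int, event: dict) -> None:
    # The key combines aggregate id and version, e.g. "device-42:3".
    store[f"{aggregate_id}:{version}"] = json.dumps(event)

def load_events(aggregate_id: str) -> list[dict]:
    """Replay one aggregate's events by probing versions until a key is missing."""
    events, version = [], 1
    while (raw := store.get(f"{aggregate_id}:{version}")) is not None:
        events.append(json.loads(raw))
        version += 1
    return events

append_event("device-42", 1, {"type": "DeviceRegistered"})
append_event("device-42", 2, {"type": "DeviceSwitchedOn"})
print(load_events("device-42"))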

Document Database

Event Store Implementation in Document Database

Using a document database allows for the storage of all events associated with an aggregate within a single document, or alternatively, each event can be stored as an individual document. To maintain chronological order, a version field can be included in the document to store the events. When all events pertaining to an aggregate are stored in the same document, a query can be executed to retrieve them in the order of their version numbers. As a result, the returned events will maintain the same order as they were stored.

However, if multiple events from different aggregates are stored within a single document, it becomes challenging to retrieve them in the correct sequence. One possible approach is to retrieve all events and subsequently sort them based on a timestamp, but this method would be inefficient. Thankfully, most document databases support ACID transactions within a document, enabling the writing of a set of events within a single transaction.
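For the one-document-per-event variant, a hedged pymongo sketch might look like the following; the database, collection, and field names are illustrative, and the multi-document transaction shown requires a replica set or sharded cluster.

from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
events = client["eventstore"]["events"]

def append_events(aggregate_id: str, new_events: list[dict]) -> None:
    """Write a batch of events for one aggregate atomically."""
    with client.start_session() as session:
        with session.start_transaction():  # needs a replica set / sharded cluster
            events.insert_many(
                [{"aggregate_id": aggregate_id, **e} for e in new_events],
                session=session,
            )

def load_events(aggregate_id: str) -> list[dict]:
    # Retrieve the aggregate's events in the order they were stored.
    return list(events.find({"aggregate_id": aggregate_id}).sort("version", ASCENDING))

append_events("device-42", [{"version": 1, "type": "DeviceRegistered"},
                            {"version": 2, "type": "DeviceSwitchedOn"}])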

Column Oriented Database

Event Store Implementation in Column-oriented Database

In a column-oriented database, events can be stored as columns, with each row containing all events associated with an aggregate. The aggregate id serves as the row key, while the events are stored within the columns. Each column represents a key-value pair, where the key denotes the version number and the value contains the event data. Adding a new event is as simple as adding a new column, as the number of columns can vary for each row.

To retrieve events for a specific aggregate, the row key (aggregate id) must be known. By ordering the columns based on their keys, the events can be collected in the correct order. Writing a set of events in a row for an aggregate involves storing each event in a new column. Many column-oriented databases support transactions within a row, enabling the writing process to be performed within a single transaction.

However, similar to document databases, column-oriented databases face a challenge when attempting to retrieve all events in the order they were stored. There is no straightforward method available for achieving this outcome.
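For the column-oriented case, here is a hedged sketch using the DataStax Python driver, with the aggregate id as the partition (row) key and the version as the clustering column; the keyspace, table, and payloads are assumptions.

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS eventstore
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS eventstore.events (
        aggregate_id text,
        version int,
        data text,
        PRIMARY KEY (aggregate_id, version)  -- partition key + clustering column
    )
""")

# A batch confined to one partition (one aggregate) is applied atomically.
session.execute("""
    BEGIN BATCH
        INSERT INTO eventstore.events (aggregate_id, version, data)
        VALUES ('device-42', 1, '{"type": "DeviceRegistered"}');
        INSERT INTO eventstore.events (aggregate_id, version, data)
        VALUES ('device-42', 2, '{"type": "DeviceSwitchedOn"}');
    APPLY BATCH
""")

# Events for one aggregate come back ordered by the clustering column (version).
for row in session.execute(
    "SELECT version, data FROM eventstore.events WHERE aggregate_id = %s", ("device-42",)
):
    print(row.version, row.data)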

Comparison of Specific Datastore Implementations

Postgres

While Postgres is commonly known as a relational database management system, it offers features and capabilities that make it suitable for storing events in an Event Sourcing architecture.

Postgres provides a flexible data model that allows you to design a schema to store events efficiently. In Event Sourcing, you typically need an Events table that includes fields such as a global sequence number, aggregate identifier, version number per aggregate, and the payload data (event) itself. By designing the schema accordingly, you can easily write queries that efficiently perform full sequential reads or search for all events related to a specific event source (aggregate) ID.

Postgres also offers transactional support, ensuring ACID (Atomicity, Consistency, Isolation, Durability) guarantees. This means that events can be written and read in a transactional manner, providing data integrity and consistency.

The append-only policy, a fundamental aspect of Event Sourcing, can be enforced in Postgres by avoiding UPDATE or DELETE statements on event records. This way, events are stored chronologically as new rows are inserted into the table.

Additionally, Postgres is a mature technology with wide adoption and a strong ecosystem of tools and libraries. It provides a range of performance optimization techniques, indexing options, and query optimization capabilities, enabling efficient retrieval and processing of events.

While Postgres may not offer some of the specialized features specific to event-sourcing databases, with careful design and optimization, it can serve as a reliable and scalable event store for Event Sourcing architectures.

Cassandra

Cassandra can be used as an event store for Event Sourcing, but there are some considerations to keep in mind.

Cassandra is a highly scalable and distributed NoSQL database that offers features such as peer-to-peer connections and flexible consistency options. It excels in handling large amounts of data and providing high availability.

However, when it comes to guaranteeing strict sequencing or ordering of events, Cassandra may not be the most efficient choice. Cassandra’s data model is based on a distributed and partitioned architecture, which makes it challenging to achieve strong sequencing guarantees across all nodes in the cluster. Ensuring strict ordering typically requires leveraging Cassandra’s lightweight transaction feature, which can impact performance and should be used judiciously as per the documentation’s recommendations.

In an Event Sourcing scenario where appending events is a common operation, relying on lightweight transactions for sequencing events in Cassandra may not be ideal due to the potential performance cost.

While Cassandra may not provide native support for strong event sequencing guarantees, it can still be used effectively as an event store by leveraging other mechanisms. For example, you can include a timestamp or version number within the event data to order events chronologically during retrieval.

Additionally, Cassandra’s scalability and fault-tolerance features make it suitable for handling large volumes of events and ensuring high availability for event-driven architectures.

MongoDB

MongoDB, a widely used NoSQL document database, offers high scalability through sharding and schema-less storage.

MongoDB’s flexibility and schema-less nature allow for easy storage of event data. You can save events as documents in MongoDB, with each document representing an individual event. This makes it straightforward to store events and their associated data without the need for predefined schemas.

To ensure the ordering and sequencing of events, you can include a timestamp or version number within each event document. This enables you to retrieve events in the order they were stored, either by sorting on the timestamp or using the version number.

MongoDB also provides high scalability through sharding, allowing you to distribute your event data across multiple servers to handle large volumes of events.

However, there are certain considerations when using MongoDB as an event store. MongoDB traditionally supported only single-document transactions, which could limit the atomicity and consistency of operations involving multiple events. However, MongoDB has introduced support for multi-document transactions, which can help address this limitation.

Another challenge with MongoDB as an event store is the lack of a built-in global sequence number. To achieve a full sequential read of all events, you would need to implement custom logic to maintain the sequence order.

Additionally, MongoDB’s ad hoc query capabilities and scalability make it a strong choice for event storage. However, transactional support and event-pushing performance may require careful consideration and optimization.

Kafka

Kafka is often mentioned in the context of Event Sourcing due to its association with the keyword “events.” Using Kafka as an event store for Event Sourcing is a topic of debate and depends on specific requirements and trade-offs. Kafka’s design as a distributed streaming platform makes it a popular choice for building event-driven systems, but there are considerations to keep in mind.

Kafka’s core concept revolves around the log-based storage of events in topics, providing high throughput, fault tolerance, and scalable message processing. It maintains an immutable, ordered log of events, which aligns well with the append-only nature of event sourcing.

However, using Kafka as an event store for Event Sourcing introduces some challenges. One key consideration is the trade-off between storing events for individual aggregates in a single topic or creating separate topics for each aggregate. Storing all events in a single topic allows full sequential reads of all events but makes it more challenging to efficiently retrieve events for a specific aggregate. On the other hand, storing events in separate topics per aggregate can optimize retrieval for individual aggregates but may pose scaling challenges due to the design of topics in Kafka.

To address this, various strategies can be employed. For example, you can create a topic per aggregate and use partitioning to distribute the events across partitions. However, ensuring an even distribution of entities across partitions and handling the restoration of global event order can introduce additional complexities.
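As one hedged illustration of the partitioning strategy, the kafka-python sketch below keys each message by aggregate id so that all of an aggregate's events land in the same partition and keep their relative order; the broker address, topic name, and payloads are assumptions.

import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",               # assumed local broker
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Keying by aggregate id routes all of the aggregate's events to the same
# partition, so their relative order is preserved within that partition.
producer.send("events", key="device-42", value={"version": 1, "type": "DeviceRegistered"})
producer.send("events", key="device-42", value={"version": 2, "type": "DeviceSwitchedOn"})
producer.flush()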

Another consideration is access control and security. When using Kafka as an event store, it’s important to manage access to the Kafka instance to ensure data privacy and integrity since anyone with access can read the stored topics.

Furthermore, it’s worth noting that using Kafka as an event store deviates from the traditional Event Sourcing principle of storing events before publishing them. With Kafka, storing and publishing events are not separate steps, which means that if a Kafka instance fails during the process, events may be lost without knowledge.

EventStore

“EventStore” is a purpose-built database that aligns well with the principles and requirements of Event Sourcing. It is a popular option written in .NET and is well-integrated within the .NET ecosystem.

Event Store, as the name suggests, focuses on efficiently storing and managing events. It offers features and capabilities that are well-suited for event-driven architectures and Event Sourcing implementations.

One key feature of Event Store is its ability to handle projections or event-handling logic within the database itself. This allows for efficient querying and processing of events, enabling developers to work with events in a flexible and scalable manner.

Event Store provides the necessary mechanisms to store events in an append-only fashion, ensuring that events are immutable and ordered. It allows you to store events for different aggregates or entities in a single stream, making it easy to retrieve events for a specific aggregate in the order they were stored.

Additionally, Event Store offers strong transactional support, allowing for ACID (Atomicity, Consistency, Isolation, Durability) guarantees when working with events. This ensures that events are persisted reliably and consistently, maintaining data integrity.

Event Store also provides features for event versioning, enabling compatibility and evolution of event schemas over time. This is important for maintaining backward compatibility and handling changes to event structures as the application evolves.

Furthermore, Event Store typically offers built-in features for event publishing and event subscription mechanisms, facilitating event-driven communication and integration within an application or across microservices.

However, a key feature of “EventStore” is that projection (or event-handling) logic is placed and executed within the Event Store itself using JavaScript. While this is a tempting proposition, it diverges from our view that the Event Store should store events and that projection logic should be handled by the consumers themselves. Keeping projections outside the store allows a greater degree of flexibility over how to handle our events, without being limited to the JavaScript functionality available in the “EventStore” database.

Redis Streams

Redis Streams differs from traditional Redis Pub/Sub and functions more like an append-only log file. It shares conceptual similarities with Apache Kafka.

Redis Streams allow you to append new events to a stream, ensuring that they are ordered and stored sequentially. Each event in the stream is associated with a unique ID, providing a way to retrieve events based on their order of arrival. Additionally, Redis Streams support multiple consumers, enabling different components or services to consume events from the same stream.

When using Redis Streams as an event store, you can store event data along with any necessary metadata. This allows you to capture the essential details of an event, such as the event type, timestamp, and data payload. By leveraging the features provided by Redis Streams, you can efficiently publish, consume, and process events in a scalable manner.
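A brief redis-py sketch of the append and ordered read-back described above; the stream name and field values are assumptions.

import redis

r = redis.Redis(host="localhost", port=6379)  # assumed local instance

# XADD appends an entry; Redis assigns a monotonically increasing ID ("*").
r.xadd("events:device-42", {"type": "DeviceRegistered", "version": "1"})
r.xadd("events:device-42", {"type": "DeviceSwitchedOn", "version": "2"})

# XRANGE reads entries back in the order they were appended.
for entry_id, fields in r.xrange("events:device-42", min="-", max="+"):
    print(entry_id, fields)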

It’s important to note that while Redis Streams offer a convenient and performant solution for event storage, there are considerations to keep in mind. For example, ensuring data durability and persistence may require additional configuration and replication mechanisms. Additionally, access control and security measures should be implemented to safeguard the event data stored in Redis Streams.

Epilogue

Comparison of Datastores across Event Store Requirements

In conclusion, each data store has its strengths and limitations for Event Sourcing. It is essential to consider factors such as scalability, consistency, sequencing, transactionality, and query support when selecting a suitable data store for an event-sourced system.

There Ain’t No Such Thing As A Free Lunch.

Picking the Event Store for Event Sourcing was originally published in Technopreneurial Treatises on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published on May 21, 2023 08:59

August 20, 2022

Standardizing RESTful APIs

Consistent, hierarchical, and modular APIs

Prologue

So, you have decided to build your enterprise application as loosely coupled microservices, perhaps even thinking of making them serverless after reading my old article — Serverless Microservices, and after coming across REST APIs you have decided to build your application APIs conforming to the REST architectural style. Good job! Basically, you have adopted the microservices architecture, where each piece of your application performs a smaller set of services, runs independently of the other parts of the application, operates in its own environment, stores its own data, and communicates with other services via REST APIs. But then you have multiple REST APIs in your services to take care of and provide to customers, especially if you are opening your services to third parties. So here comes the problem:

The Problem

When you let your microservice teams adopt REST, they all come up with their own standards and conventions of doing things. Eventually there is pandemonium: clients and customers are not able to follow the REST APIs designed for your enterprise services because each API is designed in its own unique style. When you, as an architect or team lead, go ahead to discuss this problem with your microservices development teams, you get the following response.

Also, as your team adds more REST services and glues them together, you start worrying more about their standardization and presentation, since you never put any convention in place. Now you realize the problem of standardizing your RESTful APIs and wish you had followed better, saner practices from the start. After all, migrating all your clients and customers to a new standard version of the APIs won’t be an easy task.

The Definitions

You look at the REST API specification from first principles and start by understanding which parts of its underlying syntax are customizable and can be standardized.

API — An API is a set of definitions and protocols for building and integrating application software, referred to as a contract between an information provider and an information user — establishing the content required from the consumer (the call) and the content required by the producer (the response).

The advantage of using APIs is that as a resource consumer, you are decoupled from the producer and you don’t have to worry about how the resource is retrieved or where it comes from. It also helps an organization to share resources and information while maintaining security, control, and authentication — determining who gets access to what.

REST — REST stands for REpresentational State Transfer and was created by computer scientist Roy Fielding. REST is a set of architectural constraints, not a protocol or a standard. When a client request is made via a RESTful API, it transfers a representation of the state of the resource to the requester or endpoint in one of several formats via HTTP: JSON, HTML, XML, plain text etc.

In order for an API to be considered RESTful, it has to conform to these criteria:

1. A client-server architecture made up of clients, servers, and resources, with requests managed through HTTP.
2. Stateless client-server communication, meaning no client information is stored between requests and each request is separate and unconnected.
3. Cacheable data that streamlines client-server interactions.
4. A uniform interface between components so that information is transferred in a standard form. This requires that:
● resources requested are identifiable and separate from the representations sent to the client.
● resources can be manipulated by the client via the representation they receive, because the representation contains enough information to do so.
● self-descriptive messages returned to the client have enough information to describe how the client should process them.
● hypertext/hypermedia is available, meaning that after accessing a resource the client should be able to use hyperlinks to find all other currently available actions they can take.

5. A layered system that organizes each type of server (those responsible for security, load-balancing, etc.) involved the retrieval of requested information into hierarchies, invisible to the client.

6. Code-on-demand (optional): the ability to send executable code from the server to the client when requested, extending client functionality.

API developers can implement REST in a variety of ways, which sometimes leads to chaos, especially when the syntactic schema for REST across multiple API development teams is not aligned and standardised. So, in the next sections, evaluation criteria and suggestions to standardize REST APIs are presented to evade this chaos.

REST API Evaluation Criteria

The REST APIs should be holistically evaluated and improved based on the following criteria:

Resource Oriented Design
Standard Methods
Custom Methods
Standard Fields and Query Parameters
Success & Errors
Naming Conventions
Important Patterns

Resource Oriented Design

The API should define a hierarchy, where each node is either a collection or a resource.

● A collection contains a list of resources of the same type. For example, a device type has a collection of devices.

● A resource has some state and zero or more sub-resources. Each sub-resource can be either a simple resource or a collection. For example, a device resource has a singleton resource state (say, on or off) as well as a collection of changes (change log).

As a specific use case, a singleton resource can be used when only a single instance of a resource exists within its parent resource (or within the API, if it has no parent).

Here is a suggestion for simple and consistent API hierarchy:

Collection: device-types
Resource: device-types/{dt-id}
Singleton Resource: device-types/{dt-id}/state-machine
Collection: device-types/{dt-id}/attributes
Resource: device-types/{dt-id}/attributes/{attribute-id}
Collection: device-types/{dt-id}/changes
Resource: device-types/{dt-id}/changes/{change-id}
Collection: device-types/{dt-id}/devices
Resource: device-types/{dt-id}/devices/{d-id}
Singleton Resource: device-types/{dt-id}/devices/{d-id}/state
Custom Method: device-types/{dt-id}/devices/{d-id}/state:transition
Collection: device-types/{dt-id}/devices/{d-id}/changes
Resource: device-types/{dt-id}/devices/{d-id}/changes/{change-id}

Note that in the above, an id can be a string name, a number, or even a UUID, based on the agreed convention. Example:

https://tenant.staging.saas.com/api/v1/device-types/house-alarm/devices/cbb96ec2-edae-47c4-87e9-86eb8b9c5ce4s

Standard Methods

The API should support standard methods for LCRUD (List, Create, Read, Update and delete) on the nodes in the API hierarchy.

The common HTTP methods used by most RESTful web APIs are:

● GET retrieves a representation of the resource at the specified URI. The body of the response message contains the details of the requested resource.
● POST creates a new resource at the specified URI. The body of the request message provides the details of the new resource. Note that POST can also be used to trigger operations that don’t actually create resources.
● PUT either creates or replaces the resource at the specified URI. The body of the request message specifies the resource to be created or updated.
● PATCH performs a partial update of a resource. The request body specifies the set of changes to apply to the resource.
● DELETE removes the resource at the specified URI.

The following table describes how to map standard methods to HTTP methods:

Standard Method | HTTP Mapping | HTTP Request Body | HTTP Response Body
List            | GET          | NA                | Resource* list
Read            | GET          | NA                | Resource*
Create          | POST         | Resource          | Resource*
Update          | PUT or PATCH | Resource          | Resource*
Delete          | DELETE       | NA                | NA

Based on the requirements, some or all of the above API methods for the node hierarchy should be supported. Note that the * marked resource data will be encapsulated inside the response body format containing status, request and data.

Here are the differences between POST, PUT, and PATCH for their usage in REST:

● A POST request creates a resource. The server assigns a URI for the new resource and returns that URI to the client. In the REST model, a POST request is generally applied to collections, and the new resource is added to the collection. A POST request can also be used to submit data for processing to an existing resource as a custom method, without any new resource being created.
● A PUT request creates a resource or updates an existing resource. The client specifies the URI for the resource to be created or updated, along with the complete representation of the resource. If a resource with this URI already exists, it is replaced; otherwise a new resource is created, if the server supports doing so. Whether to support creation via PUT depends on whether the client can meaningfully assign a URI to a resource before it exists. If not, then use POST to create resources and PUT or PATCH to update them.
● A PATCH request performs a partial update to an existing resource. The client specifies the URI for the resource along with the set of changes to apply to it. This can be more efficient than using PUT, because the client only sends the changes, not the entire representation of the resource (a short sketch follows this list). Technically, PATCH can also create a new resource (by specifying a set of updates to a “null” resource) if the server supports this, but again, that is an anti-pattern.
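To make the PUT vs PATCH distinction concrete, here is a minimal sketch using Python's requests library against a hypothetical device URL (the URL, field names, and values are illustrative, not part of any real API):

import requests

device_url = "https://tenant.staging.saas.com/api/v1/device-types/house-alarm/devices/42"  # hypothetical id

# PUT: the client sends the complete representation; the resource is replaced as a whole
requests.put(device_url, json={
    "name": "porch-alarm",
    "displayName": "Porch Alarm",
    "volume": 5,
    "timeZone": "UTC",
})

# PATCH: the client sends only the fields that change
requests.patch(device_url, json={"volume": 7})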

PUT requests must be idempotent but POST and PATCH requests are not guaranteed to be idempotent. If a client submits the same PUT request multiple times, the results should always be the same (the same resource will be modified with the same set of values).

Custom Methods

Custom methods refer to API methods besides the above 5 standard methods, for functionality that cannot be easily expressed via standard methods. One example of such custom functionality is the state transition of devices based on API requests. The corresponding API can be modelled in either of the following ways:

1. Based on Stripe invoice workflow design: Use / to separate the custom verb. Note that this might be confused with a resource noun.

https://tenant.staging.saas.com/api/v1/device-types/house-alarm/devices/cbb96ec2-edae-47c4-87e9-86eb8b9c5ce4s/state/ring

2. Based on Google Cloud API design: Use : instead of / to separate the custom verb from the resource name so as to support arbitrary paths.

https://tenant.staging.saas.com/api/v1/device-types/house-alarm/devices/cbb96ec2-edae-47c4-87e9-86eb8b9c5ce4s/state:ring

In either of the above ways, the API should use HTTP POST verb since it has the most flexible semantics.

Standard Fields and Query Parameters

Resources may have the following standard fields:

id
createdAt
createdBy
updatedAt
updatedBy
name
displayName
timeZone
regionCode
languageCode

Note that displayName, timeZone, regionCode, languageCode etc. are useful when you want to provide localizations in your API.

Collections may also have standard fields like totalCount in metadata.

Collections List API may have the following standard query parameters (with alternate names):

filter/query
orderBy
select/fields/view
limit/top/pageSize
offset/start
format (json, xml)
validate

The standard query parameters can be separated from custom query parameters by preceding them with $. Example:

https://tenant.staging.saas.com/api/v1/device-types/house-alarm/devices?$orderBy=volume&owner=jaykmr&$format=json

Success & Errors

Success & Errors across all the methods should be consistent, i.e. have the same standard structure, for example:

{
  "status": { "code": "", "description": "", "additionalInfo": "" },
  "request": { "id": "", "uri": "", "queryString": "", "body": "" },
  "data": {
    "meta": { "totalCount": "", ... },
    "values": { "id": "", "url": "", ... }
  }
}

All the APIs should have a common response structure, which can be achieved by using a common response formatter in the code for resource methods. Note that in case of success, when no data is returned, the API response can return an empty list [] for a collection or an empty object {} for a resource, while in case of error it can return data as null, keeping a consistent response schema across methods.
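As an illustration, the common response formatter can be a single helper that every resource method calls before returning; the sketch below follows the envelope above (the helper name and its arguments are made up):

def format_response(code, description, request_info, values=None, meta=None, additional_info=""):
    """Wrap any resource payload in the shared status/request/data envelope."""
    return {
        "status": {"code": code, "description": description, "additionalInfo": additional_info},
        "request": request_info,  # e.g. {"id": ..., "uri": ..., "queryString": ..., "body": ...}
        "data": None if values is None else {"meta": meta or {}, "values": values},
    }

# Success with an empty collection: data.values is [], data.meta carries totalCount
ok = format_response("200", "OK", {"id": "req-1", "uri": "/api/v1/device-types"},
                     values=[], meta={"totalCount": 0})

# Error: data stays null so the schema is identical across methods
err = format_response("404", "Not Found", {"id": "req-2", "uri": "/api/v1/device-types/x"})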

Naming Conventions

Here are my suggestions on the naming conventions, without the intention of provoking a tabs vs spaces kind of debate:

● Collection and Resource names should use unabbreviated plural form and kebab case.

● Field names and query parameters should use lowerCamel case.

● Enums should use Pascal case names.

● Custom Methods should use lowerCamel case names. (example: batchGet)

There are multiple good suggestions, like the Google API Naming Conventions, but the choice depends on the organization; whatever the organization chooses and adopts should be aligned across all teams and strictly adhered to.

Important Patterns

List Pagination: All List methods over collections should support pagination using the standard fields, even if the response result set is small.

The API pagination can be supported in 2 ways:

Limit Offset Pagination using (offset, limit)
Cursor Pagination using (limit, cursor_next_link)

The cursor next link makes the API really RESTful as the client can page through the collection simply by following these links (HATEOAS). No need to construct URLs manually on the client side. Moreover, the URL structure can simply be changed without breaking clients (evolvability).
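A minimal sketch of cursor pagination with a HATEOAS-style next link (the function, field names, and the numeric cursor are purely illustrative; real cursors are usually opaque tokens):

def list_devices_page(devices, limit, cursor=None):
    """Return one page of a collection plus a ready-to-follow next link."""
    start = 0 if cursor is None else int(cursor)
    page = devices[start:start + limit]
    next_cursor = start + limit if start + limit < len(devices) else None
    next_link = (f"/api/v1/device-types/house-alarm/devices?$limit={limit}&$cursor={next_cursor}"
                 if next_cursor is not None else None)
    return {"meta": {"totalCount": len(devices), "next": next_link}, "values": page}

# The client never builds URLs itself: it just follows meta.next until it is null
first_page = list_devices_page(devices=list(range(25)), limit=10)
second_page = list_devices_page(devices=list(range(25)), limit=10, cursor=10)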

Delete Response: Return an empty data response {} for a hard delete and the updated resource data response for a soft delete. Return null data in failures and errors.

Enumeration and default value: 0 should be the start and default for enums like state singletons and their handling should be well documented.

Singleton resources: For example, the state machine of a resource (say, device type) as well as the state of a resource (say, device) should never support the Create and Delete methods, as the states (ON, OFF, RING, etc.) can be configured, i.e. Updated, but not Created or Deleted.

Request tracing and duplication: All requests should have a unique requestID, like a UUID, which the server will use to detect duplication and make sure a non-idempotent request like POST is only processed once. The requestID will also help in distributed tracing and caching, and it should be echoed back in the request section of the response.
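A minimal sketch of server-side duplicate detection (the in-memory dictionary is purely for illustration; a real service would use a shared store with a TTL):

processed = {}  # requestID -> cached response

def handle_post(request_id, create_fn):
    """Process a non-idempotent request at most once per requestID."""
    if request_id in processed:
        return processed[request_id]          # duplicate: replay the original response
    response = create_fn()                     # first time: perform the side effect
    response["request"] = {"id": request_id}   # echo the requestID back in the response
    processed[request_id] = response
    return response

# Submitting the same requestID twice performs the creation only once
r1 = handle_post("req-123", lambda: {"status": {"code": "201"}, "data": {"values": {"id": "d-1"}}})
r2 = handle_post("req-123", lambda: {"status": {"code": "201"}, "data": {"values": {"id": "d-2"}}})
assert r1 is r2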

Request Validation: Methods with side-effects like Create, Update and Delete can have a standard boolean query parameter validate, which when set to true does not execute the request but only validates it. If valid, it returns the success status code with the current, unchanged resource data in the response; else it returns the error status code.

https://tenant.staging.saas.com/api/v1/device-types/house-alarm/devices/cbb96ec2-edae-47c4-87e9-86eb8b9c5ce4s/state:ring?$validate=true

For example, the above request will validate whether the alarm can be put to ring or not.

HATEOAS (Hypermedia as the Engine of Application State): Provide links for navigating through the API (especially the resource URL). For example,

{
"orderID":3,
"productID":2,
"quantity":4,
"orderValue":16.60,
"links":[
{
"rel":"customer",
"href":"https://adventure-works.com/customers/3",
"action":"GET",
"format":["text/xml","application/json"]
},
{
"rel":"customer",
"href":"https://adventure-works.com/customers/3",
"action":"PUT",
"format":["application/x-www-form-urlencoded"]
},
{
"rel":"self",
"href":"https://adventure-works.com/orders/3",
"action":"GET",
"format":["text/xml","application/json"]
},
{
"rel":"self",
"href":"https://adventure-works.com/orders/3",
"action":"PUT",
"format":["application/x-www-form-urlencoded"]
},
{
"rel":"self",
"href":"https://adventure-works.com/orders/3",
"action":"DELETE",
"format":[]
}]
}

Versioning: Versioning enables the client or consumer to keep track of changes, be they compatible or even incompatible breaking changes, so that it can call the specific version it is able to process.

The versioning can be supported in 3 ways:

URI versioning: https://tenant.staging.saas.com/api/v1/device-types/
Query string versioning: https://tenant.staging.saas.com/api/device-types?$version=v1
Header versioning: GET https://tenant.staging.saas.com/api/device-types
Custom-Header: api-version=v1

Epilogue

So, you have divided the monolithic application into microservices, all integrated by loose coupling into separate micro-applications. Now is the time to revisit your APIs, standardize them, and raise the bar before making them external for customers. The APIs should be consistent, hierarchical and modular. Separating methods, standard fields and patterns from the collection and resource hierarchy will allow you to build resource-agnostic, re-usable abstractions, which can be implemented by the resource interfaces and deployed as services.

You could even break the frontend into micro-frontends and serve them separately to make it a complete micro-application. Refer to my previous article — Demystifying micro-frontends — on pairing micro-frontends with such a micro-services-based backend to make a complete micro-application-based architecture.

Solve the problem once, use the solution everywhere!
Reference
Google API Design Guide
Django Rest Framework API Guide
OData 4.0 Documentation
REST API Design Best Practices
Microsoft API Design Guide

Standardizing RESTful APIs was originally published in Technopreneurial Treatises on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published on August 20, 2022 08:06

August 14, 2022

Architecting Event driven Systems the right way

Architecting Event-driven Systems the right wayPrologue

So, you have built your enterprise application as loosely coupled micro-services, even made them serverless after reading my old article — Serverless Microservices — but, after attending a few cool tech conferences and coming across the terms event-driven and event-sourcing architectures, you start wondering whether you have made the right architectural decisions. You revisit your domain problem, relook at the business requirements, and then comes the problem:

The Problem

So, you have built your application architecture on top of the Entity Relationship Model, Object Model, or Attribute Value Model, using those models to capture the application state for your business problem.

Then the audit department comes in, asking for an explanation of how a particular object evolved to a particular value, or requesting that you revert or time-travel back to a specific state of the object in the business context. In simple words, they don’t care where the application state is now, as that may have become corrupt; they want to drill down to the root cause of that corruption by figuring out which events got it there, and to revert back to the last valid state of the business object.

Also, as you add more and more objects and services and glue them together, you start worrying about the scalability of any audit logs you would create to capture the evolution of the objects in the application state.

The Concepts

You look at the problem from the first principles and start by understanding that the business logic can be modelled as event-driven state machines:

State Machine — A state machine is a mathematical abstraction used to design algorithms based on a behaviour model. A state machine reads a set of inputs and changes to a different state based on those inputs.

State — A state is a description of the status of a system waiting to execute a transition.

Transition — A transition is a change from one state to another. A transition is a set of actions to execute when a condition is fulfilled or an event received.

Event — The entity that drives the state changes.

Event-driven State Machine — A state machine is event-driven, if the transition from one state to another is triggered by an event or a message (not based on consuming/parsing characters or based on conditions).

Here is a simple example of an event-driven state machine for a button application, with two states (On and Off) and one action (buttonPressed):
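Sketched as a transition table, the machine is simply:

Current State | Event         | Next State
Off           | buttonPressed | On
On            | buttonPressed | Off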

Event Driven Systems — Event Driven Architecture is about components communicating via publishing events rather than making (e.g.) RPC/CRUD calls against each other or manipulating shared state. It’s a communication strategy (albeit one which often relies on messages being persisted for fairly long periods until they are consumed).

Implementation Paradigms for Event Driven System

Paradigm 1 — State-oriented Object Model or Attribute Value Model: An event-driven system can be implemented by capturing only the current state of the system. This is naturally based on state-oriented persistence, which only keeps the latest version of the entity state. State is stored as a dictionary or record of attribute values, and the attributes are first-class citizens, modified only by allowed encapsulated methods acting as event actions.

public class button {
    boolean state;

    // Event handler: pressing the button toggles the current state in place
    public void buttonPressed() {
        state = !state;
    }

    public void setState(boolean state) {
        this.state = state;
    }

    public boolean getState() {
        return state;
    }
}

For example, the state of the button application is stored only as an attribute value of On or Off, encapsulated along with the modifier action buttonPressed as a method, in the Entity Relationship Model, Object Model, or Attribute Value Model. The system is still event-driven, as it operates on events, but the events are not first-class citizens of this system; rather, the current attribute value or state of the button entity, whether it is on or off, is the first-class citizen. The event, or encapsulated method buttonPressed, is not stored but applied eagerly to update the current state, and thus cannot be replayed or time-travelled.

Consider another example of a shopping cart service, where the cart is represented as a record in a carts table and a cart-items table holds rows that say “2 bananas in cart 101 with price $1 a piece” in the relational model. To add items to the cart, we insert (or update, if it’s just a quantity change) rows in the cart-items table. When the cart is checked out, we publish a representation of the cart and its contents in JSON or another serialized format so the shipping service knows what’s in the cart, and we set carts.checked_out to TRUE.

That’s a perfectly sane Event Driven Architecture. The cart service communicates that there’s a checked out cart to the shipping service. However, it’s not event-sourced, i.e. we can’t reconstruct the database tables from the events as we are not tracking the events.

Paradigm 2 — Event Sourcing: An event-driven system can also be implemented by capturing only the events manifested in the system. This is naturally based on event-sourced persistence (using an event journal), where each state/attribute mutation is stored as a separate record called an event. Events are first-class citizens in the persistence layer, and states and attributes are side-effects of events. We store the events and lazily apply them to derive the state or attributes in the query resolver. Events can also be replayed or time-travelled.

import java.util.ArrayList;
import java.util.List;

enum event {
    BUTTON_PRESSED
}

public class event_sourced_button {
    List<event> buttonEvents = new ArrayList<event>();

    // Record the event instead of mutating state eagerly
    public void buttonPressed() {
        buttonEvents.add(event.BUTTON_PRESSED);
    }

    private boolean applyEvent(event e, boolean state) {
        if (e == event.BUTTON_PRESSED)
            state = !state;
        return state;
    }

    // Derive the current state lazily by replaying the recorded events
    public boolean getState(boolean initialState) {
        boolean state = initialState;
        for (event e : buttonEvents)
            state = applyEvent(e, state);
        return state;
    }
}

For example, the event-sourced implementation of the button application does not store the state of the button; rather, it stores the buttonPressed events on the button and applies those events one by one to derive the current state. The system is event-driven as well as event-sourced: it operates on events by making them the first-class citizens of the system, storing only the events and then deriving the current attribute value or state of the button entity, whether it is on or off, from the event list. The events are applied lazily to evaluate the current state, given the initial state, and thus can be replayed or time-travelled up to a certain time or event.

Similarly, the previous example of the shopping cart service could also be made event-sourced. It stores events in an event store (which could be a datastore designed for the needs of event sourcing, or a relational or non-relational database being used in a particular way, e.g. an events table). The following sequence of events for cart 101 might be written to the event store:

AddItem { “banana”, $1.00 }
AddItem { “apple”, $1.50 }
AddItem { “banana”, $1.00 }
RemoveItem { “apple” }
DiscountApplied { Requirement { 2, “banana” }, -$0.10 }
AddItem { “mango”, $2.00 }
CheckedOut { Items { Item { “banana”, 2, $1.00 }, Item { “mango”, 1, $2.00 } }, Discount { “banana”, 2, -$0.10 } }

That last event (along with the fact that it’s for cart 101, which is metadata associated with the event) can be used to derive an event for publishing. One key thing to note is that there’s nothing else being written to the database but these events.

Benefits of using Event Sourcing based Event Driven Systems

Since event sourcing mandates keeping a persistent journal log of the events, which forms the single source of truth from which to derive the application state, a number of facilities can be built on top of the event log:

Complete Rebuild: The application state can be discarded completely and rebuilt by re-running the events from the event log on an empty application state.
Temporal Query: The application state can be determined at any point in time by starting with a blank application state and re-running the events up to a particular time or event.
Event Replay: If a past event was incorrect, the consequences can be computed by reversing it and the later events and then replaying the corrected event and the later events. This can also be achieved by starting from a blank application state and replaying the events after fixing them and their order.
System Debug/Audit: The append-only storage of events in the event-sourced journal provides an audit trail that can be used to monitor actions taken against a data store, regenerate the current state as materialized views or projections by replaying the events at any time, and assist in testing and debugging the system.

Using Event Stores for Event Sourcing based Event Driven Systems

The Event Sourcing pattern captures the application state transitions or data mutations by storing the sequence of events (each representing a data or state mutation), each of which is recorded in an immutable append-only store. That immutable append-only store acts as the single source of truth about the operations on the application state and data. The event store typically also publishes these events so that consumers can be notified and handle them if needed. Consumers could, for example, initiate tasks that apply the operations in the events to other systems, or perform any other associated action required to complete the operation, thus decoupling the application from the subscribed systems.

An event-sourced persistence models the entities as event streams and keeps an immutable journal of these event streams. When the entity state or an attribute mutates, a new event is produced and saved. When the entity state needs to be restored, all the events for that entity are read and each event is applied to change the state, reaching the correct final state of the entity. Note that state here is the pure-function application of the event stream to the entity.

Here is how a sample EntityStoreAdapter on top of an Event Store persistence for Event Sourcing might look:

public class EntityStoreAdapter {
    EventDatabase db;
    Serializer serializer;

    // Command applier (eager): persist the entity's pending changes as events
    public void Save<T>(T entity) where T : Entity {
        var changes = entity.changes;
        if (changes.IsEmpty()) return; // nothing to do
        var dbEvents = new List<DbEvent>();
        foreach (var change in changes) {
            var serializedEvent = serializer.Serialize(change);
            dbEvents.Add(new DbEvent(data: serializedEvent, type: entity.GetTypeName()));
        }
        var streamName = EntityStreamName.For(entity);
        db.AppendEvents(streamName, dbEvents);
    }

    // Query resolver (lazy): rebuild the entity by replaying its event stream
    public T Load<T>(string id) where T : Entity, new() {
        var streamName = EntityStreamName.For<T>(id);
        var dbEvents = db.ReadEvents(streamName);
        if (dbEvents.IsEmpty()) return default(T); // no events
        var entity = new T();
        foreach (var dbEvent in dbEvents) {
            entity.When(dbEvent);
        }
        return entity;
    }
}

Materializing Application State for Event Sourcing based Event Driven Systems

In end-user-facing applications, the current state of the application needs to be derived on demand, by materializing the events as actions on top of the entity states. This can also be done using a scheduled job, so that the state of the entity is pre-computed and stored for querying and presentation.

For example, a system can maintain a materialized view of all customer orders that’s used to populate parts of the UI, say aggregated view. As the application adds new orders, adds or removes items on the order, and adds shipping information, the events that describe these changes can be handled and used to update the materialized view.

Event sourcing is commonly combined with the CQRS pattern, an architectural style that separates reads from writes. In a CQRS architecture, data from the write database streams to a read database, where it is materialized and queried. Since the read and write layers are separate, the system is only eventually consistent but scales to heavy reads and writes.
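As a rough sketch of the read side, a projector consumes the event stream and folds each event into a queryable view; the event names follow the cart example above, while the function and the dict-based view are made up for illustration:

cart_view = {"items": {}, "checkedOut": False}  # materialized read model for one cart

def project(event):
    """Fold a single cart event into the read-side view."""
    kind, body = event["type"], event["body"]
    if kind == "AddItem":
        item = cart_view["items"].setdefault(body["name"], {"qty": 0, "price": body["price"]})
        item["qty"] += 1
    elif kind == "RemoveItem":
        cart_view["items"].pop(body["name"], None)
    elif kind == "CheckedOut":
        cart_view["checkedOut"] = True

for e in [{"type": "AddItem", "body": {"name": "banana", "price": 1.00}},
          {"type": "AddItem", "body": {"name": "apple", "price": 1.50}},
          {"type": "RemoveItem", "body": {"name": "apple"}}]:
    project(e)

Because this projector runs behind the write side, reads lag the most recent writes, which is exactly the eventual-consistency caveat listed below.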

Caveats to take care of in Event Sourcing based Event Driven Systems

Eventual Consistency — Reads will not reflect the most recent writes of events, as there is a delay while new materialized views or projections of the data are created by event replaying. During the delay between publishing the events, creating the materialized view, and notifying the consumers, new events may have been written to the event journal.
Event Log Immutability — The event log acts as the single source of truth and needs to be immutable, so event data should never be updated. The only way to undo a change to an entity is to add a compensating event that fixes it. Even an event schema change needs to be applied to all the events in the event journal store.
Event Ordering and Linearity — As multiple applications or publishers create and publish events to the event store, event ordering and linearity become important for maintaining data consistency. Adding a timestamp to every event, or annotating each event resulting from a request with an incremental identifier, resolves ordering conflicts.
Consumer Idempotency — Event publication might be at least once (keeping it exactly once is difficult), so consumers of the events must be idempotent. They must not reapply the update described in an event if the event is handled more than once, to avoid unnecessary side-effects on the entity state or data attribute computation.
Snapshotting and Materialising — The events need to be snapshotted and materialised at regular intervals, especially if the size of the event stream is too large to handle on-demand queries on the model state and its attribute data.

Difference between Event Sourcing and Change Data Capture

Change data events use the underlying database transaction log as the source of truth. The event is based on the database that the transaction log belongs to, rather than on the original application, and it is available only for as long as the log entries are persisted (they are not immutable), derived from a mutable database, which means tighter coupling to the database data model. CDC captures the effect of an event in terms of create, update or delete — for example, “the button was updated to the off state.”

The Realization

Event Sourcing can be realized end-to-end, let’s say on AWS, with:

Kinesis Data Streams: serverless data streaming service
Kinesis Firehose: ETL service to deliver streaming data to a data warehouse
S3: highly available and scalable object store
Redshift: cloud data warehouse
API Gateway: cloud API management
Lambda: cloud-native functions for compute services

Epilogue

So, you have now understood event-driven systems architecture and dived deep into event sourcing to implement such a system. However, the use-cases for event sourcing need to be taken into consideration before going ahead with the implementation of such a complex system for an event-driven architecture. Event sourcing should be used when the intent, purpose, or reason in the data needs to be captured as recorded events, and when you need to be able to replay them to restore the state of a system, roll back changes, or keep a history and audit log, i.e. when events emerge as a natural, first-class functional feature of the system and the system can accept eventual consistency for its data entity models as a non-functional side-effect.

Solve the problem once, use the solution everywhere!

Architecting Event driven Systems the right way was originally published in Technopreneurial Treatises on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published on August 14, 2022 19:33

May 27, 2022

The Panchatantra to be an Excellent Engineering Leader

The five aspects of Engineering LeadershipPrologue

Panchatantra is Sanskrit for “Five Treatises” or “Five Chapters”, first used for an ancient narrated collection of Indian animal fables, which provided food for thought and imparted learning and deep wisdom.

Being an Engineering Manager is as easy as pie, but being a great Engineering Leader is a really tough nut to crack. Engineering management moves you up the career ladder, but it also makes you 100x more accountable and responsible, instead of just being an individual contributor anymore. However, a lot of engineering managers today go verbatim by their job description, or by their own experience of their past managers, on what to do.


The most dangerous leadership myth is that leaders are born-that there is a genetic factor to leadership. That’s nonsense; in fact, the opposite is true. Leaders are made rather than born. Warren Bennis

Without the right understanding of how to lead with ethics and sincerity, most organizations today end up having bad apples in management, leading to a bad top-down culture and thus a really bad experience for the reporting engineers. Engineering managers either try to be strongly authoritative and micro-managing or, on the other hand, are lazily disconnected and timid.


The challenge of leadership is to be strong, but not rude; be kind, but not weak; be bold, but not bully; be thoughtful, but not lazy; be humble, but not timid; be proud, but not arrogant; have humor, but without folly. Jim Rohn

Also, read my article on the right approach to software engineering.

This article is an engineering leader playbook that collates all the expectations an engineering manager has to meet and how they can evolve into a great Engineering Leader, maintaining integrity and high standards while wearing multiple hats to lead the team, grow their reports, plan with vision, solve problems and, finally, deliver results.


The supreme quality of leadership is integrity. Dwight Eisenhower
Leadership Panchatantra

A very disturbing feature of overconfidence is that it often appears to be poorly associated with knowledge — that is, the more ignorant the individual, the more confident he or she might be. Robert Trivers

The five treatises that an engineering manager is expected to follow and manage effectively and efficiently fall into the following buckets, in order of priority, starting with the 3 Ps — People, Process and Product — followed by Stakeholder and Tech. Let’s dive deep into each of these treatises with curiosity, not over-confidence, and learn how to wear those hats the right way to be a really good leader.


Leadership and learning are indispensable to each other. John F. Kennedy
Panchatantra I : People Management

The growth and development of people is the highest calling of leadership. Harvey Firestone

People management is the foremost responsibility of leadership: the leader, no longer an individual contributor, is actually accountable for growing and nurturing the individual skills and talent of the team.

People Management is the ability to lead, grow and inspire the cavalry as the cavalry captain.

The following are three major dimensions to people management.

1. Lead the team


It is better to lead from behind and to put others in front, especially when you celebrate victory when nice things occur. You take the front line when there is danger. Then people will appreciate your leadership. Nelson Mandela

2. Grow the team


Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others. Jack Welch

3. Inspire the team


If your actions inspire others to dream more, learn more, do more and become more, you are a leader. John Quincy Adams

Ending this most important treatise on this small word of caution,


You don’t lead by hitting people over the head — that’s assault, not leadership. Dwight Eisenhower
Panchatantra II : Process Management

One of the tests of leadership is the ability to recognize a problem before it becomes an emergency. Arnold Glasow

Large-scale engineering projects consist of multiple ongoing operations involving multiple teams and folks working towards a common goal: advancing the organization’s future and enhancing its current capabilities. If such large projects are not managed in a coordinated way, both at the intra-team level with cross-dependency management and at the inter-team level with agile process management, they may soon turn into chaos, leading to frustration and failure for the involved teams.

Process Management is the ability to control the chaos in the war within your cavalry or with other troops as the cavalry captain.

Resolving the problems using dependency analysis, even topologically sorting the dependency graph to find the order of items to pick with an execution plan, and escalating to the right folks to control the chaos, is part and parcel of this management task.

The following are three major dimensions to process or program management.

1. Formalize the procedures and processes


Discipline is the bridge between goals and accomplishment. Jim Rohn

2. Work towards a common organizational goal


Leadership — mobilization toward a common goal. Garry Wills

3. Have a comprehensive visibility into the plan

I think our work as movement leaders isn’t just about our own visibility but rather how do we make the whole visible. How do we not just fight for our individual selves but fight for everybody? Patrisse Cullors

Ending this treatise on this important realisation,


Excellence is a continuous process and not an accident. A. P. J. Abdul Kalam
Panchatantra III : Product Management

The very essence of leadership is that you have to have a vision. It’s got to be a vision you articulate clearly and forcefully on every occasion. You can’t blow an uncertain trumpet. Reverend Theodore Hesburgh

Leaders are trusted to cater to every step of a product’s lifecycle — from development to positioning and pricing — by focusing on the product and its customers first and foremost. To build the best possible product, product leaders advocate for the end users and customers within the organization and make sure the voice of the market is heard and heeded.

Product Management is the ability to strategize with a war vision to win the war for your citizens as the cavalry captain.

The following are three major dimensions to product management.

1. Customer Obsession


The number one thing that has made us successful by far is obsessive compulsive focus on the customer. Jeff Bezos

2. Product Vision and Strategy


Leadership is the capacity to translate vision into reality. Warren Bennis

3. Prioritization and Execution


The role of leadership is to transform the complex situation into small pieces and prioritize them. Carlos Ghosn

Ending this treatise on this interesting product management issue of falling into the trap of envisioning complex products and saying yes to a hodgepodge of features,


The art of leadership is saying no, not saying yes. It is very easy to say yes. Tony Blair
Panchatantra IV : Stakeholder Management

Lead and inspire people. Don’t try to manage and manipulate people. Inventories can be managed but people must be led. Ross Perot

A stakeholder is an individual, group or organization that is impacted by the outcome of the engineering project under your leadership. Project stakeholders, as the name implies, have an interest in the success of a project, and can be internal or external to the organization that is sponsoring the project.

In large-scale engineering projects, if the projects do not provide enough visibility to their stakeholders on both progress and impediments in an organized way, the project and the team may soon lose the trust and support of key stakeholders, leading to disappointment and despair among the involved teams. To avoid this, engineering leaders are expected to systematically identify stakeholders; analyze their needs and expectations (say, using a power/interest grid); and plan, implement and monitor various tasks to engage with them for the success of the deliverable.

Stakeholder Management is the ability to update and collaborate with other troop captains, your Marshals and your Majors as the cavalry captain.

The following are three major dimensions to stakeholder management.

1. Stakeholder Identification


All of your stakeholders have to have the right seat at the table, and they all have to be successful. It’s hard to do, but you have to keep your eye on developing a meaningful relationship where it is beneficial for them. Then you work backwards from there. Brian France

2. Stakeholder Analysis and Prioritization


Find the appropriate balance of competing claims by various groups of stakeholders. All claims deserve consideration but some claims are more important than others. Warren Bennis

3. Expectation Setting and Management


When one’s expectations are reduced to zero, one really appreciates everything one does have. Stephen Hawking

Ending this treatise on this important realization of acting in the interests of the stakeholders and the customers,


A healthy corporation acts on the interests of its stakeholders and customers. Ari Melber
Panchatantra V : Technical Management

Whatever you are, be a good one. Abraham Lincoln

In a technical role, leadership involves steering from the front and working side-by-side with employees, not as micro-managers but as hands-on leaders, to achieve the goals of the project and the company. Instead of agonizing over the details to make sure everything is done correctly, excellent technical leaders are familiar with both the bigger picture of the vision and the deeper picture of the organization’s daily workings, so that they can zoom in or zoom out at will, standing at the optimal distance. By occasionally showing up on the front lines, they lead by example, one of the ways to apply a hands-on technical approach to leadership.

Technical Management is the ability to ride the horse as the cavalry captain.

The following are three major dimensions to technical management.

1. Technical Proficiency and Excellence


Indeed, the woes of Software Engineering are not due to lack of tools, or proper management, but largely due to lack of sufficient technical competence. Niklaus Wirth

2. Delegation and Trust


The inability to delegate is one of the biggest problems I see with managers at all levels. Eli Broad

3. Zoom in to be Gate Opener and Zoom out to be Gate Keeper


I think that one of the things that I can do is I seem to have the ability to zoom in super tight for very small details, but then jump back for sort of that big picture perspective. And I think that ultimately, that’s one of my strengths, because you have — every detail matters. Henry Selick

Ending this treatise on this important realization of being the servant leader,


A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves. Lao Tzu
Epilogue

In the end, it all boils down to the following happy realisation of servant leadership, which can only be learned through practice and experience,


There are three essentials to leadership: humility, clarity and courage. Fuchan Yuan

Self-awareness is the final key to being an excellent leader: knowing what you are good at and where you need to grow, and then strategizing to use your strengths while delegating for your weaknesses. After all, leadership is all about levelling up yourself and growing others, so they can learn and the organization can do things at scale.


I am not afraid of an army of lions led by a sheep; I am afraid of an army of sheep led by a lion. Alexander the Great

The Panchatantra to be an Excellent Engineering Leader was originally published in Technopreneurial Treatises on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published on May 27, 2022 21:33