How to Secure Your Node.js Containers on Kubernetes With Best Practices


Learn security best practices for Kubernetes, and especially for securing applications built with Node.js running on Kubernetes. We will talk about securing the cluster, your Node.js containers, and more. We will also look at how to use OIDC to secure access to the clusters.

This talk has been presented at DevOps.js Conf 2022, check out the latest edition of this JavaScript Conference.

FAQ

What is Role-Based Access Control (RBAC) in Kubernetes?

Role-Based Access Control (RBAC) is a widely used security mechanism in Kubernetes that allows defining different permissions based on user roles within an organization. It helps in implementing security policies that closely match an organization's structure and is most effective in medium to large organizations.

Why use OpenID Connect (OIDC) for cluster access?

OpenID Connect (OIDC) is a secure and scalable authentication protocol that provides a single sign-on solution for Kubernetes cluster access. It simplifies onboarding and off-boarding processes by allowing user management through the OIDC provider, eliminating the need to manage sensitive data like passwords directly in the cluster.

How are secrets managed in Kubernetes?

Secrets in Kubernetes are used to manage and store sensitive information such as passwords, tokens, and keys securely. They can be mounted as data volumes or exposed as environment variables within containers, ensuring that sensitive data is handled securely and is not exposed in plaintext.

Why should Kubernetes be kept up to date?

Regularly updating Kubernetes helps in addressing bugs, security vulnerabilities, and ensuring compatibility with the latest features. Staying current with updates is crucial to maintaining the security and efficiency of the cluster, especially to protect against known vulnerabilities and exploits.

What are the benefits of isolating workloads into namespaces?

Isolating workloads into different namespaces aids in managing permissions and access control more effectively. It allows for finer-grained security policies and limits the potential impact of security breaches, as compromised resources in one namespace won't affect others.

Why use minimal and up-to-date base images?

Using minimal and up-to-date base images reduces the attack surface by eliminating unnecessary packages and vulnerabilities. This practice also ensures that containers are lightweight and only contain essential functionalities, which enhances both security and performance.

What do monitoring and auditing provide?

Monitoring and auditing provide visibility into the activities and health of the Kubernetes cluster. They help in detecting abnormal behaviors or potential security breaches early, allowing for quick mitigation actions and ensuring compliance with security policies.

Deepu K Sasidharan
34 min
24 Mar, 2022

Video Summary and Transcription
Today's talk is about securing Kubernetes containers, especially for Node.js. The best practices for securing Kubernetes include using RBAC, OIDC, and secrets, as well as isolating workloads and securing container images. OIDC is recommended for authentication in Kubernetes, and securing the Kubernetes cluster is crucial. Cloud-based Kubernetes clusters can utilize OIDC or the default authentication mechanism provided by the cloud provider. Managing team size and dealing with different security philosophies are important considerations. Overall, securing Kubernetes is essential for protecting the infrastructure and data.

1. Introduction to Kubernetes Security

Short description:

Today's talk is about securing Kubernetes containers, especially for Node.js. Regardless of how you run your Kubernetes clusters, you need to ensure their security. Introductions: I'm Deepu K. Sasidharan, co-lead of JHipster, creator of KDash, and a developer advocate at Okta. Follow me on Twitter and check out my blog and book about JHipster.

Hello everyone. Welcome to my talk. Today I'm going to talk about securing your Kubernetes containers, especially for Node.js. If you're a DevOps engineer, there's a good chance that you're maintaining either an on-prem Kubernetes cluster or a managed service like EKS, AKS, or GKE. But regardless of how you run your Kubernetes clusters, you need to make sure that they are secure.

But first, introductions. My name is Deepu K. Sasidharan. I'm the co-lead of JHipster. I also created a nifty Kubernetes dashboard called KDash. I'm an open-source aficionado, a polyglot developer, and a Java Champion. I work as a developer advocate at Okta with a focus on DevOps. I also write frequently about languages and tech on my blog, which you can find at deepu.tech. Please do follow me on Twitter if you are interested in my content. I have written a book about JHipster; if you like this talk, you might like the book as well, so please do check it out.

2. Understanding Kubernetes Security

Short description:

Before we talk about securing Kubernetes or before we talk about security best practices in Kubernetes, it is important for us to have a basic understanding of Kubernetes security. Like any other complex piece of software, security in Kubernetes is multifold. TLS is used to ensure transport security and authentication and authorization can be done using multiple mechanisms in Kubernetes. Kubernetes comes with many security options out of the box, as we saw. But to bulletproof your infrastructure, you need to consider many more security best practices.

Before we talk about securing Kubernetes or before we talk about security best practices in Kubernetes, it is important for us to have a basic understanding of Kubernetes security. Like any other complex piece of software, security in Kubernetes is multifold. It can be broadly categorized into four layers. The transport security, authentication, authorization, and admission control.

TLS is used to ensure transport security and authentication and authorization can be done using multiple mechanisms in Kubernetes. There is also a possibility of adding custom admission control modules to add further policies and security in Kubernetes. So these are the things that are available out of the box in Kubernetes.

Kubernetes comes with many security options out of the box, as we saw. But to bulletproof your infrastructure, you need to consider many more security best practices. Today, we'll look into some of the vital security best practices. You can also find a similar blog post of mine at the link provided on this slide, so please do check that out if you want to read more about these.

3. RBAC and Authorization in Kubernetes

Short description:

The first best practice for securing Kubernetes is to use RBAC. RBAC allows you to define role-based access control that closely resembles your organization's business roles. It provides flexible control over access and can be modeled based on your organization's structure. Most Kubernetes distributions have RBAC enabled by default. You can create cluster roles and role bindings to control access to resources.

So, first, the first best practice would be using RBAC. Kubernetes supports multiple authorization modes for its API server: attribute-based access control (ABAC), role-based access control (RBAC), node authorization, and webhook mode. Out of all these, RBAC is the most secure and most widely used, and is ideal for enterprises and medium to large organizations.

With RBAC you can define role-based access control that closely resembles your organization's business roles. RBAC also works great with OIDC authentication. So, with RBAC, you can have much more flexible control of who has access to what, and you can model it based on how your organization is structured, based on your departments and so on. Most Kubernetes distributions have RBAC enabled by default. You can check this by running kubectl cluster-info dump and looking for the authorization-mode flag in the output, which will show something like --authorization-mode=RBAC. If not, you can enable it using the --authorization-mode flag for the API server. You can do this when creating a cluster or by patching an existing one. For example, setting --authorization-mode=RBAC,Node will enable both RBAC and node authorization on the cluster; you can have multiple authorization modes on the same cluster. Once RBAC is enabled, you can create roles, cluster roles, role bindings, and cluster role bindings to control access to your resources. Here is a small example of a role and role binding that lets users view pods and services. I would recommend checking out the Kubernetes documentation on RBAC, cluster roles, and role bindings for further learning, as you will find a lot more explanation and examples of how to use this flexibly to manage complex authorization requirements.
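The slide's example isn't reproduced in the transcript, but a minimal Role and RoleBinding along those lines might look like this (the namespace, resource names, and user are placeholders):

```yaml
# A namespaced Role granting read-only access to pods and services,
# plus a RoleBinding granting it to a single user. All names are
# illustrative; adapt them to your own cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-app
  name: pod-service-viewer
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: my-app
  name: view-pods-services
subjects:
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-service-viewer
  apiGroup: rbac.authorization.k8s.io
```

With OIDC in place, the subject can also be a group claim from the provider, which keeps the bindings aligned with your organization's structure.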

4. Securing with OIDC and Using Secrets

Short description:

The next best practice is to use OpenID Connect or OIDC for authentication in Kubernetes. OIDC is the most secure and scalable solution, providing single sign-on and easy user management. It eliminates the need to store sensitive information on user machines and allows for additional security features like multi-factor authentication. Check out the blog post on securing clusters with OIDC for more details. Another best practice is using secrets to store sensitive data in Kubernetes.

The next best practice would be to use OpenID Connect, or OIDC, to secure your cluster. This is for the authentication part; RBAC was for the authorization part. Kubernetes supports multiple authentication mechanisms. Some of the most common are client certificates, basic authentication, tokens (which include service account tokens, bearer tokens, and so on), OpenID Connect (OIDC), and proxy-based authentication. These are the authentication mechanisms that are supported by Kubernetes out of the box.

Out of all these authentication mechanisms, OIDC is the most secure and scalable solution. It is ideal for clusters accessed by large teams, as it provides a single sign-on solution for all users and makes it easy to onboard and off-board them. Once you secure your cluster using OIDC, you don't have to go into the Kubernetes cluster to do any user management; you can do everything, onboarding and off-boarding included, from your OpenID Connect provider. You also don't have to worry about clearing usernames, passwords, tokens, or certificates from user machines, as you would with the traditional mechanisms; it is all taken care of by the OIDC provider. It is also far more secure than other mechanisms, since you don't store any sensitive information like client secrets or passwords on a user's computer. You can also use features like multi-factor authentication, tokens, YubiKeys, and so on, as supported by your OIDC provider, and all these mechanisms become available for accessing your Kubernetes cluster as well. That means you can add additional layers of security that are not possible with any other authentication mechanism supported by Kubernetes. You can also check the blog post I wrote about securing your clusters with OIDC for more details and to see how you can actually do this for your cluster. The blog shows setting this up using Okta, but the steps are similar for any OIDC provider, so you can easily switch from Okta to Keycloak or whatever OIDC provider you prefer. So do check that blog out to see how you can actually do this.

Here is how OIDC works with Kubernetes. When you run a command with kubectl, it uses kubelogin and opens a browser to authenticate you with the OIDC provider. It then uses the auth response obtained from the OIDC provider to fetch tokens from the auth server and passes the tokens to the Kubernetes API server, which will be used to authenticate the user.
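On the server side, this flow mostly comes down to a handful of kube-apiserver flags; as a sketch, with a placeholder issuer URL and client ID standing in for whatever your provider issues:

```yaml
# Fragment of a kube-apiserver manifest showing only the OIDC-related
# flags. The issuer URL and client ID are placeholders for your
# provider's values.
- --oidc-issuer-url=https://dev-123456.okta.com/oauth2/default
- --oidc-client-id=kubernetes
- --oidc-username-claim=email
- --oidc-groups-claim=groups
```

The username and groups claims are what RBAC subjects then match against, which is why OIDC and RBAC pair so well together.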

The next best practice would be using secrets. Of course, this one should be a no-brainer. Kubernetes has a Secret resource that can be used to store sensitive data. This is a great way to store passwords, keys, and other sensitive data. Secrets can be used for storing string data, Docker config, certificates, tokens, files, and so on. Secrets can be mounted as data volumes or exposed as environment variables to be used in containers. Secrets can be created from plain-text values or base64-encoded values.
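A minimal sketch of both patterns, with illustrative names and values, might look like this:

```yaml
# A Secret holding a database password, consumed as an environment
# variable by a container. Names and values are illustrative only.
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DB_PASSWORD: "s3cr3t-value"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app
spec:
  containers:
    - name: app
      image: node:20-alpine
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: DB_PASSWORD
```

The same Secret could instead be mounted as a volume if the application expects a file rather than an environment variable.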

5. Securing Kubernetes Best Practices

Short description:

Don't use plain-text for sensitive secrets. Implement RBAC for secrets. Keep Kubernetes up to date. Restrict admin access. Control traffic between pods and clusters. Isolate workloads by namespace.

So please don't be that person who uses plain-text for sensitive secrets. Secrets are flexible and native to Kubernetes, so there is no reason for you not to use them. Also, make sure to implement proper RBAC for secrets as well, so that not everyone in your organization has access to them; you don't want to expose secrets to everyone who happens to have access to the cluster.
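One way to sketch that, reusing the illustrative `app-secrets` name from above, is a Role that names the exact secrets it covers instead of granting blanket access to all secrets in the namespace:

```yaml
# A Role that only allows reading one named Secret. The namespace,
# role name, and secret name are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-app
  name: app-secrets-reader
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-secrets"]
    verbs: ["get"]
```

Binding this Role only to the service accounts or users that truly need the secret keeps everyone else out, even if they can otherwise see the namespace.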

So next would be keeping Kubernetes up to date. Like any other software, Kubernetes also has bugs and issues, and from time to time there might be a high-severity bug that calls for a CVE, for example memory-safety-related issues. Hence, it is an excellent idea to keep the Kubernetes version up to date for both the server and the CLI client. You can check the Kubernetes security and disclosure information website to see if there are known security bugs for your Kubernetes version. If you're using a managed platform, it should be pretty easy to upgrade, and for on-prem installations, there are tools like kOps, kubeadm, and so on that make it easy to upgrade your clusters.

The next would be restricting admin access. The kubelet is the primary node agent running on each node, and by default, the kubelet's HTTP endpoints are not secured. This could allow unintended access and hence should be restricted. Furthermore, when someone has access to a Kubernetes cluster, they can access the Kubernetes API server and can also SSH into the cluster nodes themselves, which is not very safe either. Cluster and node access should be limited as much as possible: disable SSH access for non-admin users, and secure your API server using OIDC and RBAC, as we saw earlier, so that only authenticated users with sufficient roles can access the API.

So next would be controlling traffic between pods and clusters. Generally, pods within the same cluster can communicate with each other, and if you have multiple clusters in the same network, there may be traffic between those clusters as well. Do not leave all this open, as it would increase the attack surface and could lead to a compromised cluster when another cluster in the network is affected. Use Kubernetes network policies to control traffic between pods and clusters and allow only the least necessary traffic, so that even if one cluster or container is compromised, you don't end up compromising the entire cluster or your network. The next practice, which is a bit similar, is to isolate workloads by namespace.
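A hedged sketch of the usual pattern, deny everything by default and then explicitly allow the traffic you need, could look like this (the namespace and labels are illustrative):

```yaml
# Deny all ingress to every pod in the namespace by default...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
# ...then allow only pods labelled app=frontend to reach the backend.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Note that network policies only take effect if the cluster's CNI plugin supports them, so verify that with your provider first.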

6. Securing Workloads and Containers

Short description:

Isolate workloads in different namespaces. Set resource limits at the namespace level. Monitor and audit your clusters. Follow infrastructure best practices. Secure containers by running them as non-root users.

So do not run all your workloads in a single namespace. Isolating workloads in different namespaces based on business needs is more secure and it is also easier to manage with RBAC. So this way you can fine tune RBAC even further to let users access only what they need to see. You can also use Kubernetes network policies to isolate traffic between namespaces if applicable and if required. So isolating your workloads with different namespaces is always a good idea.

As we're securing APIs and the cluster itself, it is also essential to set resource limits on how much CPU, memory, and persistent disk space is used by namespaces and resources. This secures your cluster from denial-of-service attacks: when a particular container uses up all the resources, the problem can bubble up and end up compromising your entire cluster. To avoid that, use resource quotas and limit ranges to set limits at the namespace level, and use requests and limits at the container level as well.
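As a sketch of both mechanisms together, with all numbers purely illustrative:

```yaml
# A ResourceQuota capping total consumption in a namespace, plus a
# LimitRange supplying per-container defaults for pods that don't
# declare their own requests and limits.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: my-app
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: my-app
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 256Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
```

With these in place, a single runaway container can exhaust at most its own limit, not the whole node or cluster.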

Finally, it is also extremely important to monitor and audit your clusters. Enable audit logging for the cluster and use monitoring tools to keep an eye on the network traffic to, from, and within the cluster. Monitoring can be done using open-source tools like Prometheus and Grafana or with proprietary tools. It is very important that you monitor this traffic so that you can prevent breaches before they happen; you can also set up alarms, if the tool supports them, to proactively catch security breaches.

Furthermore, keep these infrastructure best practices in mind when securing a Kubernetes cluster. Ensure that all communication is done via TLS. Protect etcd with TLS, a firewall, and encryption, and restrict access to it using strong credentials. Set up IAM access policies in supported environments like AWS, GKE, or Azure. Secure the Kubernetes control plane; don't leave it open. Rotate your infrastructure credentials frequently. And if you're using a cloud or PaaS, restrict access to the cloud metadata API, because platforms like AWS, Azure, and GCP provide metadata APIs that can expose information you don't want to be public.

Securing the containers is as important as securing the cluster. So far, we saw what can be done at the cluster level to secure your Kubernetes setup, but it is also very important that we follow best practices at the container level. So, let's see what they are. Do not run your containers as the root user, as this would give the container unlimited access to the host; in case of a compromised container, this would give the attacker root access and a wider attack surface. Always run containers using a non-root user to limit access, and use the least-privileged user as much as possible. For Node.js containers specifically, the official base images have a least-privileged user called node.
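A minimal sketch of a pod spec enforcing this, assuming UID 1000, which is the UID of the `node` user in the official Node.js images:

```yaml
# Pod spec fragment forcing containers to run as a non-root user and
# tightening a few related knobs. UID/GID 1000 correspond to the
# `node` user in the official Node.js images.
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
  containers:
    - name: app
      image: node:20-alpine
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
```

With runAsNonRoot set, the kubelet refuses to start the container at all if the image would run as root, so misconfigured images fail fast instead of running with elevated privileges.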

7. Securing Container Images

Short description:

Use non-root user. Use minimal and up-to-date official base images. Remove unwanted dependencies, packages, and debugging tools. Use official verified images and trusted registries. Verify image publisher.

So, we can use that instead of root to be more secure. Next would be to use minimal and up-to-date official base images. Remove all unwanted dependencies, packages, and debugging tools from the image, as this makes it more secure, reduces the attack surface, and also keeps the image lightweight. Make sure you use official verified images for popular software, and prefer LTS versions when possible, as they have longer security patches and updates. Finally, use a trusted registry for non-official images, or maybe even consider building the images from source on your own and hosting them on your private registries. And if you're using third-party registries, always verify the image publisher and make sure it is reputable. Don't go about trusting every registry or publisher you come across.
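A hedged multi-stage Dockerfile sketch along these lines, assuming a project with a `build` script that emits to `dist/` (both assumptions, adjust to your project layout):

```dockerfile
# Multi-stage build: dev dependencies and build tooling stay in the
# builder stage; the final image carries only production dependencies
# and the built output, and runs as the non-root `node` user.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```

The final stage never sees compilers, dev dependencies, or source files, which keeps both the attack surface and the image size down.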

8. Securing Containers and Images

Short description:

Prevent loading unwanted kernel modules in containers. Enable container image scanning in your CI/CD pipeline. Use Docker Bench for Security to audit your custom images.

Next would be to prevent loading unwanted kernel modules in containers. During development, you might end up using more from the underlying Linux kernel for debugging or testing purposes, but for production, make sure you do not load any unwanted kernel modules. These can be restricted using rules in /etc/modprobe.d on the node, or by uninstalling the unwanted modules from the node itself. This reduces the attack surface and also reduces the resource usage of that particular container.

Next would be to enable container image scanning in your CI/CD pipeline. This helps you catch known vulnerabilities before they become an attack vector. You can use open-source tools like Clair or Anchore for this, or commercial tools like Snyk. These are quite easy to set up, and they support most CI/CD solutions out of the box. Another best practice for containers would be to use Docker Bench for Security to audit your custom container images for security best practices. If you are building container images for your application, it's a good idea to run them through Docker Bench for Security to ensure that you're following all security best practices, and also to discover additional ones that apply to your particular container. It's a great tool to ensure that custom images you build for production follow all the best practices possible.
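As a rough sketch of wiring a scanner into a pipeline, assuming Snyk and a GitLab-style CI file (the job name and variables are illustrative; adapt to your CI system and scanner of choice):

```yaml
# Hypothetical CI job that fails the pipeline when the freshly built
# image has known high-severity vulnerabilities. Requires the Snyk
# CLI to be available in the job image and SNYK_TOKEN to be set.
scan-image:
  stage: test
  script:
    - snyk container test "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --severity-threshold=high
```

Failing the build on findings, rather than merely reporting them, is what actually stops vulnerable images from reaching the registry.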

9. Securing Node.js Containers in Kubernetes

Short description:

Use Pod Security Admission to limit a container's access to the host. Install only production dependencies. Set NODE_ENV to production. Safely terminate applications using an init system like dumb-init. Use a .dockerignore file to ignore sensitive files. Reach out to me via Twitter for more content.

Another best practice for containers would be to use Pod Security Admission, the successor to Pod Security Policies. From Kubernetes version 1.21 onwards, Pod Security Policies are deprecated, and the successor is Pod Security Admission. You can use this admission controller to limit a container's access to the host. This helps to reduce the attack surface and to prevent privilege escalation in case of a compromised container.
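A minimal sketch using the built-in Pod Security Admission labels on a namespace (the namespace name is illustrative):

```yaml
# Enforce the "restricted" Pod Security Standard for every pod in the
# namespace; the audit and warn modes surface violations without
# blocking, which is useful while migrating existing workloads.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

The restricted profile rejects, among other things, pods that run as root or allow privilege escalation, so it pairs naturally with the non-root practices above.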

And since we are talking about securing Kubernetes clusters in a JavaScript ecosystem, there are a few things specific to Node.js that you have to keep in mind when securing your Node.js containers running on Kubernetes. First, only install production dependencies: in your images, when you are installing dependencies, make sure you pass the --only=production flag (or --omit=dev in newer npm versions) so that you don't end up installing your dev dependencies or other non-production dependencies, as this reduces the attack surface. Optimize for production by setting NODE_ENV to production, as Node.js applications have some optimizations when run in production mode and are more secure. Safely terminate applications using an init system like dumb-init. Mostly, we end up just running node server.js or node filename.js, but that is not ideal: in case of application crashes, things are not cleaned up and the application is not terminated properly, so use an init system like dumb-init so that all the signals from your host are passed on properly to the application itself. And finally, use a .dockerignore file to ignore sensitive files like .env, .npmrc, and so on, because these files can contain sensitive information like passwords or tokens, and we have no reason to expose that to the container, which could also end up in the public domain if the images are public.
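Putting those Node.js specifics together, a hedged Dockerfile sketch (paths and filenames are illustrative) might look like:

```dockerfile
# Node.js-specific hardening in one place: production-only installs,
# NODE_ENV=production, the non-root `node` user from the official
# images, and dumb-init as PID 1 so signals reach the application.
FROM node:20-alpine
ENV NODE_ENV=production
RUN apk add --no-cache dumb-init
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
USER node
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "server.js"]
```

A matching .dockerignore would list at least .env, .npmrc, node_modules, and .git so that secrets and local artifacts never enter the build context in the first place.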

So that's it, folks. I hope the talk was worth your time and thank you for attending. You can reach out to me via Twitter and do check out my website for more content. Thank you.

10. Authentication Methods in Kubernetes

Short description:

Tokens are the most popular authentication mechanism for Kubernetes clusters, with 63% of users using them. OIDC is also growing in popularity, providing a more secure option. Basic auth, using username and password, is no longer widely used. Tokens are sufficient for smaller teams, but for larger organizations and those implementing RBAC, OIDC is recommended.

So the question was, what kind of authentication do you use for your Kubernetes clusters? Well, we have a big winner, and I think this is not a surprise: 63% tokens. Yeah, definitely. Were you expecting such a landslide? Yeah, this is something I was expecting, because that's the default that most people would use. I'm quite surprised by the number that OIDC has. Do we know how many votes there are overall? I can't see that; no, we just see percentages. Okay. Because I wasn't expecting OIDC. No, it's good. It would be the most secure way, of course. So it's good. It's growing. But, yeah... Sorry? It's growing, yeah. From 16 to 19. Yeah. So, yeah, definitely not a surprise that people use tokens as the default authentication mechanism, because in most setups it's also the default, so it makes sense. I don't think a lot of people go about changing these unless they have specific requirements, depending on their team and so on, but, yeah, that's interesting, definitely.

Oh, it's growing. Growing again. Oh, super fast. And even 0% for basic auth. So I'm assuming that would just be username and password. Yeah, yeah, definitely. That's a good trend, definitely. I mean, tokens are not that bad. For smaller teams, I think tokens are perfectly fine; you don't need OIDC for everyone. But if you are in a larger team where there is a lot of churn, and especially if you're trying to do RBAC, which has to reflect your organization's composition, then I think OIDC is the best choice.

11. OIDC as Secure and Flexible Authentication

Short description:

OIDC is the most secure and flexible authentication mechanism for Kubernetes. It makes onboarding easier by allowing easy addition and removal of users without touching the Kubernetes setup. OIDC and RBAC together provide granular access and authorization control, making it secure and flexible for larger organizations. Thank you.

Otherwise, it's too much of a hassle to manage tokens all the time and stuff like that. Yeah. I think we should just stay talking for like five to ten minutes, and then OIDC will be the winner, as it's growing and growing. Probably. Yeah. Right there.

Let's jump into the real audience questions. The first one: you say that OIDC is the most secure and flexible authentication mechanism for Kubernetes. How does it make onboarding easier? Yeah, that's something that I was mentioning. I think the nicest aspect of OIDC, the security and all that apart, is the flexibility, because once you have configured your cluster for OIDC authentication, adding new users and removing them becomes so easy, because you're not going to save anything on a user's computer. With certificate-based authentication, you still need to have those tokens and certificates on your laptop; you have to do that authentication and keep it. And if you want to remove someone from cluster access, you have to make sure that they have removed it, or you have to change your credentials and so on. That becomes very annoying. I used to work in a company where we were doing a lot of DevOps. We were not a huge team, like six or seven people, but it was still annoying whenever there was a new member on the team or when someone was leaving: you have to ensure that all these are cleared up. OIDC makes that extremely flexible because you don't have to touch your Kubernetes setup at all. You can do everything from your OIDC user management: you can add new users, you can remove them, and it just translates to their access. So, if someone is removed, they are not going to have access anymore, because you can do it from your authentication provider or your OIDC application itself. That makes it extremely flexible, especially for larger organizations where there are going to be team changes or churn. And it also makes RBAC easier, because OIDC and RBAC together become so flexible that you can manage access and authorization at a very granular level.
You can reflect that based on how your company might be set up. So, it makes it very secure and extremely flexible. Yeah, all right. Thank you.

Question from CC Miller. Oh, wait, sorry, that was not a question. That was a note on the poll. But he hasn't asked a question. If you're working with someone who says, why bother, we have security on the app.

12. Importance of Securing Kubernetes Cluster

Short description:

Securing your Kubernetes cluster is more important than the security of your app because it is the key to the kingdom. You can access everything. You can access the database from there depending on your setup. You can do so much if you have access to your Kubernetes cluster.

What's your answer to that? Sorry, can you repeat the question again? If you are working with someone who says, why bother, we have security on the app, then what's your answer to that? That's an entirely different thing. Security on the app is authentication or whatever for your applications, but if your cluster is not secure, if anyone can access your cluster, then you are in a very bad situation, because it doesn't matter whether your app has security or not. If I can access your cluster, if I can SSH into your cluster, I can do anything. It doesn't matter if your app is secured or not. You have to secure your cluster first. A Kubernetes cluster is the equivalent of your infrastructure, and you wouldn't leave your infrastructure unsecured just because your app is secure. I would say securing your Kubernetes cluster is more important than the security of your app because it is the key to the kingdom. You can access everything. You can access the database from there, depending on your setup. You can do so much if you have access to your Kubernetes cluster. I hope for CC Miller that you don't actually have someone on the team that doesn't care about security. Yes. I would be very unhappy if someone said that. Let's pray that it's not a real-life scenario.

13. Cloud-based Kubernetes Security Mechanisms

Short description:

For cloud-based Kubernetes clusters like EKS, AKS, or GKE, you can use OIDC for security. Providers like Google offer their own OIDC, or you can use third-party vendors. Additional authentication options are available, such as Google's custom authentication mechanism. OIDC is recommended for flexibility, but smaller teams can use the default authentication mechanism provided by the cloud provider. Avoid using basic auth.

The next question is from Ravi. What about the security mechanisms for cloud-based Kubernetes clusters? For cloud-based clusters, for example EKS, AKS, or GKE, you can basically do OIDC with any of them. They all make it easier to set up OIDC providers, and some of them provide their own. For example, with Google, you can use Google's OIDC, or you can use a third-party vendor like Okta or even Keycloak, whatever you prefer. They provide mechanisms to integrate and make it easier. Some of them also provide additional ways of authentication, because Kubernetes lets you add other authentication methods; for example, GCP adds its own authentication mechanism when you set up your clusters. By default, it's kind of a token mechanism, but they make it easy to set up. You don't have to configure all of it manually: you log in via your Google console, and it translates into a kind of token authentication, but a custom one they have built. With all these providers, you can also do OIDC. I would still say that OIDC is the best authentication mechanism even with cloud providers if you want flexibility, but if you are a small team of one or two people, then it probably doesn't matter; you can go with the default authentication mechanism provided by the cloud provider. Just please don't go with basic auth. Anything other than basic auth is still fine for a smaller team. All these mechanisms should work the same with cloud providers.

14. Team Size and Churn Management

Short description:

The turning point in team size depends on factors such as the stability of the team, the frequency of changes, and the size of the company. For stable teams with no or minimal changes, team size may not be a significant issue. However, for teams with constant changes or larger enterprises, managing team size becomes more important. In startups or small teams, manual management of team churn may still be feasible. The decision ultimately depends on the trade-off between manual effort and setting up automated processes.

What would you say is the turning point in team size, or does it depend on how many changes you have in your team? Yes, it really depends. If you have a stable team of 10 people that isn't going to change for a while, then it's probably not a big deal. But if there are going to be constant changes, then I think it's a good idea. It also depends on the size of your company. If you're in an enterprise, then it really makes sense to go this way, because it fits the way enterprises operate. But if you're in a startup or a small team, it's probably not that much of an issue, because you can still manage churn manually; I hope it doesn't happen every day or something. I was in a small team where it was a bit annoying for us, not a deal breaker, but still annoying. So it really depends. It always depends. I guess it depends on how much time you spend doing things manually versus setting it up, right? Yeah, exactly. Because it's like click and install, and it's gone.

QnA

RBAC and Other Authentication Modes

Short description:

RBAC can be used with other authorization modes like node authorization or webhook, providing more flexibility and security. CC Miller commented on their previous question, highlighting the importance of having team members who prioritize security. Dealing with different security philosophies is part of working together. Jessica had a question about the setbacks of an open source implementation of Zanzibar, but the speaker is not familiar with Zanzibar and couldn't provide an answer.

Next question: can RBAC be used along with other authorization modes? Yeah, definitely. RBAC can be used with other modes. For example, it's very common to see RBAC being used with node authorization. ABAC is not exactly deprecated, but the Kubernetes team prefers RBAC and is not putting any resources into further developing ABAC, so it's in a kind of pseudo-deprecated mode, I would say. But the other mechanisms like node authorization or webhook can be used with RBAC, and sometimes combinations can make your authorization even more flexible or more secure, depending on your needs. But yeah, they can be used together.
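As a small illustration of the combination the speaker mentions (the flag values and all names below are illustrative, not from the talk): on self-managed clusters the authorizers are chained on the API server, and RBAC objects themselves can be created imperatively.

```shell
# On a self-managed control plane, multiple authorizers can be chained;
# a request is allowed if any configured mode authorizes it:
#   kube-apiserver ... --authorization-mode=Node,RBAC,Webhook ...
# (Managed clusters like EKS/AKS/GKE configure this for you.)

# A minimal RBAC role and binding in the current namespace
# (role, binding, and user names are hypothetical):
kubectl create role pod-reader --verb=get,list,watch --resource=pods
kubectl create rolebinding read-pods --role=pod-reader --user=jane
```

With this in place, node authorization keeps handling kubelet requests while RBAC governs what the user `jane` can do, which is the kind of layered setup described above.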

All right, thank you. So CC Miller commented on their previous question: "I did, a long time ago, and unsurprisingly they are not a member of the team anymore." That's probably about our remark that we hope you don't have a team member who doesn't care about security. Well, good for you, CC Miller. Happy to hear that. I can imagine it's a struggle to work with someone on a team when you have such different philosophies on security. Also an interesting story to see how that worked out, or, well, maybe it didn't work out. But that's also part of working together, right? Dealing with people who have different philosophies. Which is interesting, let's be honest.

So there's no question at the moment, but we have Jessica typing something. Not anymore; she made up her mind and didn't want to ask a question. Okay. But that is a nice bridge to the next part, because Deepu is going to his speaker room on Spatial Chat now. So if you want to... Oh, Jessica's question came in: are there any setbacks to going for an open source implementation of Zanzibar? Sorry? Of what? Of Zanzibar. What? What's Zanzibar? I don't know. I know it's an island, I think, but I don't think she's talking about an island. I don't really understand the question.

Conclusion and Q&A

Short description:

Jessica will discuss the question about Zanzibar in the Spatial Chat. Thank you, Deepu, for joining us and to everyone who participated and asked questions. I hope you found the talk useful.

Well, Jessica, then I want to invite you to answer that, so that we can still go over this question. She is typing, so let's give her a minute. Google Zanzibar. Oh yeah. And she wants to discuss it on the Spatial Chat, so let's do that, because we don't have a lot of time left.

So, Deepu, thanks a lot for joining us. It's been great fun having you here. Yeah, thank you for having me. And thanks to everyone who joined, and thanks for all the questions. I hope the talk was useful and you could take something from this.
