Pavan Manjunath: Unfortunately, all the files described there are only available if the APANIC module works correctly, which is exactly what I am trying to achieve. I want to get the kernel logs after a kernel panic occurred. If you want to figure out what caused a kernel panic, then the kernel logs leading up to the panic should be of interest to you.
Modern kernels support process isolation and resource allocation based on cgroups. Targets are groups of units: targets call units to put the system together. For instance, graphical.target pulls in the units needed for a graphical session. Targets can build on top of one another or depend on other targets. At boot time, systemd activates the target default.target. Systemd also creates and manages the sockets used for communication between system components.
Second, crashed services can be restarted without the processes that communicate with them via sockets losing their connections.
The kernel will buffer the communication while the process restarts. Please see the upstream systemd page for more information. Basic usage: systemctl is the main tool used to introspect and control the state of the systemd system and service manager; refer to the systemctl(1) manpage for more details. The type of a unit is recognized by its file name suffix, such as .service or .socket. Some units are created by systemd without a unit file existing in the file system. A few common systemctl invocations are sketched below.
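As an illustration of the basic usage described above, here is a minimal sketch of common systemctl commands; the unit name example.service is a placeholder, not a unit shipped by any particular distribution:

# Show overall system state and any failed units
systemctl status

# List the service units systemd currently has loaded
systemctl list-units --type=service

# Inspect and restart a single unit (example.service is a placeholder)
systemctl status example.service
systemctl restart example.service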
The audit2allow tool takes dmesg denials and converts them into corresponding SELinux policy statements. As such, it can greatly speed SELinux development. To use it, pipe the denial messages into audit2allow, as in the sketch below. Nevertheless, care must be taken to examine each potential addition for overreaching permissions. In the upstream example, blindly applying the generated rule would grant the rmt domain the ability to write kernel memory, a glaring security hole. Often the audit2allow statements are only a starting point.
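A minimal sketch of the workflow described above; the exact source of the denial messages (dmesg, logcat, or an audit log file) depends on the platform, so treat this pipeline as an assumption rather than the only supported invocation:

# Collect AVC denial messages from the kernel log and turn them into
# candidate allow rules (review every rule before adopting it)
dmesg | grep avc | audit2allow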
After employing these statements, you may need to change the source domain and the label of the target, as well as incorporate proper macros, to arrive at a good policy. Sometimes the denial being examined should not result in any policy changes at all; rather, the offending application should be changed.
Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster. The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.
Audit records begin their lifecycle inside the kube-apiserver component. Each request on each stage of its execution generates an audit event, which is then pre-processed according to a certain policy and written to a backend. The policy determines what's recorded and the backends persist the records.
The current backend implementations include log files and webhooks. The audit logging feature increases the memory consumption of the API server because some context required for auditing is stored for each request. Memory consumption depends on the audit logging configuration. Audit policy defines rules about what events should be recorded and what data they should include. The audit policy object structure is defined in the audit.k8s.io API group.
When an event is processed, it's compared against the list of rules in order. The first matching rule sets the audit level of the event. The defined audit levels are None, Metadata, Request, and RequestResponse. You can pass a file with the policy to kube-apiserver using the --audit-policy-file flag. If the flag is omitted, no events are logged. Note that the rules field must be provided in the audit policy file.
A policy with zero rules is treated as illegal. If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates the audit policy file.
You can see most of the audit policy file by looking directly at the script. You can also refer to the Policy configuration reference for details about the fields defined. A minimal example policy is sketched below.
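For illustration, here is a minimal audit policy of the kind described above; the rule set is an assumption chosen for brevity, not the profile shipped with any particular distribution:

apiVersion: audit.k8s.io/v1
kind: Policy
# Skip the RequestReceived stage to avoid a duplicate event per request
omitStages:
  - "RequestReceived"
rules:
  # Record full request and response bodies for changes to Pods
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
  # Record only metadata for everything else
  - level: Metadata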
Audit backends persist audit events to external storage. Out of the box, the kube-apiserver provides two backends: a log backend, which writes events to the filesystem, and a webhook backend, which sends events to an external HTTP API. In all cases, audit events follow a structure defined by the Kubernetes API in the audit.k8s.io API group. The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using kube-apiserver flags like the ones sketched below. If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount a hostPath at the location of the policy file and log file, so that audit records are persisted.
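A hedged sketch of the log-backend flags mentioned above; the file paths and retention values are placeholders, not recommendations:

# Enable the log backend by giving it an output path and bound retention:
#   --audit-log-maxage     days to retain rotated log files
#   --audit-log-maxbackup  number of rotated files to keep
#   --audit-log-maxsize    megabytes before a file is rotated
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit/audit.log \
  --audit-log-maxage=7 \
  --audit-log-maxbackup=4 \
  --audit-log-maxsize=100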
The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend with kube-apiserver flags such as --audit-webhook-config-file, which points at a configuration for the remote service. The webhook config file uses the kubeconfig format to specify the remote address of the service and the credentials used to connect to it. Both the log and webhook backends support batching. Using webhook as an example, the relevant flags are sketched below.
To get the same flag for the log backend, replace webhook with log in the flag name. By default, batching is enabled for the webhook backend and disabled for the log backend; similarly, throttling is enabled by default for webhook and disabled for log.
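A hedged sketch of the webhook batching and throttling flags referred to above; the values are illustrative only, and the same flag names with webhook replaced by log apply to the log backend:

# Batching and throttling for the webhook backend (illustrative values)
kube-apiserver \
  --audit-webhook-config-file=/etc/kubernetes/audit-webhook.kubeconfig \
  --audit-webhook-mode=batch \
  --audit-webhook-batch-max-size=100 \
  --audit-webhook-batch-buffer-size=1000 \
  --audit-webhook-batch-throttle-qps=2 \
  --audit-webhook-batch-throttle-burst=10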
For example, if the API server handles roughly 100 requests per second and each request is audited at two stages, about 200 audit events are generated each second. Assuming that there are up to 100 events in a batch, you should set the throttling level to at least 2 queries per second. Assuming that the backend can take up to 5 seconds to write events, you should set the buffer size to hold up to 5 seconds of events; that is, 10 batches, or 1,000 events. In most cases, however, the default parameters should be sufficient and you don't have to worry about setting them manually.
You can monitor the state of the auditing subsystem through Prometheus metrics exposed by kube-apiserver (for example, apiserver_audit_event_total and apiserver_audit_error_total) and through the API server logs. Both the log and webhook backends support limiting the size of events that are logged; the flags available for the log backend are sketched below. By default, truncation is disabled for both webhook and log; a cluster administrator should set --audit-log-truncate-enabled or --audit-webhook-truncate-enabled to enable the feature.
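A hedged sketch of the log-backend truncation flags mentioned above; the size limits (in bytes) are illustrative placeholders:

# Enable truncation for the log backend and cap batch and event sizes
kube-apiserver \
  --audit-log-truncate-enabled=true \
  --audit-log-truncate-max-batch-size=10485760 \
  --audit-log-truncate-max-event-size=102400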
If you find that any Pods listed are in Unknown or Terminating state for an extended period of time, refer to the Deleting StatefulSet Pods task for instructions on how to deal with them. To learn more about debugging an init-container, read on: this page shows how to investigate problems related to the execution of Init Containers. You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.
It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or use one of the hosted Kubernetes playgrounds. See Understanding Pod status for more examples of status values and their meanings. You can also access the Init Container statuses programmatically by reading the status.initContainerStatuses field on the Pod. Init Containers that run a shell script can print commands as they're executed.
For example, you can do this in Bash by running set -x at the beginning of the script. A Pod status beginning with Init: summarizes the status of Init Container execution. Some example status values you might see while debugging Init Containers are Init:N/M (the Pod has M Init Containers and N have completed), Init:Error (an Init Container has failed to execute), and Init:CrashLoopBackOff (an Init Container has failed repeatedly). Some commands for inspecting Init Containers are sketched below.
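A hedged sketch of commands for checking Init Container status and logs; the Pod name myapp-pod and the Init Container name init-myservice are placeholders for your own names:

# Overall Pod status, including the Init: prefix described above
kubectl get pod myapp-pod

# Detailed status of each Init Container
kubectl describe pod myapp-pod

# Programmatic access to Init Container state via the status field
kubectl get pod myapp-pod -o jsonpath='{.status.initContainerStatuses[*].state}'

# Logs of a specific Init Container
kubectl logs myapp-pod -c init-myservice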
The first step in debugging a pod is taking a look at it. Check the current state of the pod and recent events with kubectl describe pod followed by the pod's name. Look at the state of the containers in the pod. Are they all Running? Have there been recent restarts? If a pod is stuck in Pending, it means that it cannot be scheduled onto a node.
Generally this is because there are insufficient resources of one type or another that prevent scheduling. Look at the output of the kubectl describe command mentioned above; there should be messages from the scheduler explaining why it cannot schedule your pod. One common reason is that you have exhausted the supply of CPU or memory in your cluster. In this case you can try several things:
Terminate unneeded pods to make room for pending pods. Check that the pod is not larger than your nodes. For example, if all nodes have a capacity of cpu: 1, then a pod with a request of cpu: 1.1 will never be scheduled.
An example command line that extracts the necessary node-capacity information is sketched below. The resource quota feature can be configured to limit the total amount of resources that can be consumed.
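Returning to checking node capacity, a hedged sketch using kubectl custom columns; the column names are arbitrary:

# List each node's name together with its CPU and memory capacity
kubectl get nodes -o custom-columns='NODE:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory'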
If the resource quota feature is used in conjunction with namespaces, it can prevent one team from hogging all the resources. When you bind a pod to a hostPort, there are a limited number of places where the pod can be scheduled.
In most cases, hostPort is unnecessary; try using a Service object to expose your pod. If you do require hostPort, then you can only schedule as many pods as there are nodes in your container cluster. If a pod is stuck in the Waiting state, then it has been scheduled to a worker node, but it can't run on that machine.
Again, the information from kubectl describe should be informative. The most common cause of Waiting pods is a failure to pull the image. There are three things to check: that the name of the image is correct, that the image has been pushed to the registry, and that the node is able to pull the image (for example, by running a manual pull on the node). Once your pod has been scheduled, the methods described in Debug Running Pods are available for debugging. ReplicationControllers are fairly straightforward: they can either create pods or they can't. If they can't create pods, then please refer to the instructions above to debug your pods.
If your container has previously crashed, you can access the previous container's crash log with kubectl logs using the --previous flag. If the container image includes debugging utilities, as is the case with images built from Linux and Windows OS base images, you can run commands inside a specific container with kubectl exec. You can run a shell that's connected to your terminal using the -i and -t arguments to kubectl exec; both are sketched below. For more details, see Get a Shell to a Running Container.
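A hedged sketch of the commands described above; the Pod and container names are placeholders:

# Logs from the previous (crashed) instance of a container
kubectl logs my-pod -c my-container --previous

# Run a one-off command inside a specific container
kubectl exec my-pod -c my-container -- cat /etc/resolv.conf

# Open an interactive shell connected to your terminal
kubectl exec -it my-pod -c my-container -- sh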
Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images. You can use the kubectl debug command to add ephemeral containers to a running Pod. First, create a pod for the example; the examples in this section use the pause container image because it does not contain debugging utilities, but this method works with all container images.
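A hedged sketch of creating the example Pod; the Pod name ephemeral-demo and the image reference are assumptions for illustration:

# Create a Pod from the pause image, which contains no shell or debug tools
kubectl run ephemeral-demo --image=registry.k8s.io/pause:3.9 --restart=Never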
If you attempt to use kubectl exec to create a shell, you will see an error because there is no shell in this container image. You can instead add a debugging container using kubectl debug, as in the sketch below, which adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container; it's necessary here because kubectl run does not enable process namespace sharing in the pod it creates.
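A hedged sketch of the kubectl debug invocation described above, continuing with the placeholder Pod name ephemeral-demo (the container created by kubectl run shares that name):

# Attach a busybox ephemeral container to the running Pod; --target shares
# the process namespace of the named container (ephemeral-demo here)
kubectl debug -it ephemeral-demo --image=busybox:1.36 --target=ephemeral-demo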
You can view the state of the newly created ephemeral container using kubectl describe on the Pod. Sometimes Pod configuration options make it difficult to troubleshoot in certain situations. For example, you can't run kubectl exec to troubleshoot your container if your container image does not include a shell or if your application crashes on startup.
In these situations you can use kubectl debug to create a copy of the Pod with configuration values changed to aid debugging. Adding a new container can be useful when your application is running but not behaving as you expect and you'd like to add additional troubleshooting utilities to the Pod.
For example, maybe your application's container images are built on busybox but you need debugging utilities not included in busybox. You can simulate this scenario using kubectl run, and then create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging, as sketched below.
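A hedged sketch of the scenario above; myapp, myapp-debug, and the image tags are placeholders taken from the prose, not names your cluster will already have:

# Simulate the application Pod (busybox-based, sleeping so it stays up)
kubectl run myapp --image=busybox:1.36 --restart=Never -- sleep 1d

# Create a copy named myapp-debug with an extra Ubuntu container for debugging;
# --share-processes lets the new container see the application's processes
kubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug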
Sometimes it's useful to change the command for a container, for example to add a debugging flag or because the application is crashing. To simulate a crashing application, use kubectl run to create a container that immediately exits, then use kubectl debug to create a copy of this Pod with the command changed to an interactive shell, as sketched below.
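A hedged sketch, reusing the placeholder name myapp for a fresh Pod that crashes immediately:

# Clean up any Pods left over from the previous example first
kubectl delete pod myapp myapp-debug

# A Pod whose container exits immediately (simulated crash)
kubectl run myapp --image=busybox:1.36 --restart=Never -- false

# Copy it as myapp-debug, replacing the container command with an interactive shell
kubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh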
Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually. In some situations you may want to change a misbehaving Pod from its normal production container images to an image containing a debugging build or additional utilities. Use kubectl debug to make a copy and change its container image, for example to ubuntu, as sketched below.
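A hedged sketch of swapping the image in a copied Pod, continuing with the myapp placeholder; the * in --set-image changes the image of every container in the copy:

# Copy myapp as myapp-debug, changing all container images to ubuntu
# (delete any earlier myapp-debug copy first)
kubectl debug myapp --copy-to=myapp-debug --set-image=*=ubuntu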
If none of these approaches work, you can find the Node on which the Pod is running and create a privileged Pod running in the host namespaces. To create an interactive shell on a node using kubectl debug, run a command like the one sketched below.
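A hedged sketch of node debugging; mynode is a placeholder for one of your node names:

# Start a debugging Pod on the node, sharing the host namespaces, and attach to it
kubectl debug node/mynode -it --image=ubuntu

# The node's root filesystem is mounted at /host inside the debugging Pod;
# remember to delete the debugging Pod with kubectl delete pod when you're done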
An issue that comes up rather frequently for new installations of Kubernetes is that a Service is not working properly. You've run your Pods through a Deployment or other workload controller and created a Service, but you get no response when you try to access it. This document will hopefully help you figure out what's going wrong. For many steps here you will want to see what a Pod running in the cluster sees.
The simplest way to do this is to run an interactive busybox Pod. For the purposes of this walk-through, let's also run some Pods to debug against; since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point. Both steps are sketched below. The label "app" is automatically set by kubectl create deployment to the name of the Deployment. You can also confirm that your Pods are serving.
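A hedged sketch of the setup described above; the deployment name hostnames and the image references are placeholders used for this walk-through:

# An interactive busybox Pod for looking at the cluster from the inside
kubectl run -it --rm --restart=Never busybox --image=busybox:1.36 -- sh

# Example workload: three Pods that each serve their own hostname over HTTP
kubectl create deployment hostnames --image=registry.k8s.io/serve_hostname
kubectl scale deployment hostnames --replicas=3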
You can get the list of Pod IP addresses and test them directly, as sketched below. The example container used for this walk-through serves its own hostname via HTTP on its container port, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening on. If you are not getting the responses you expect at this point, your Pods might not be healthy or might not be listening on the port you think they are. You might find kubectl logs useful for seeing what is happening, or perhaps you need to kubectl exec directly into your Pods and debug from there.
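A hedged sketch of testing the Pods directly from inside the cluster (for example, from the busybox Pod started earlier); app=hostnames is the label from this walk-through, and the IP addresses and $PORT are placeholders for your own values:

# List the Pod IPs behind the app=hostnames label
kubectl get pods -l app=hostnames \
    -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'

# From a Pod inside the cluster, hit each IP directly (substitute the IPs
# printed above and your Pods' port for $PORT)
for ip in 10.244.0.5 10.244.0.6 10.244.0.7; do
    wget -qO- http://$ip:$PORT
done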
Assuming everything has gone to plan so far, you can start to investigate why your Service doesn't work. The astute reader will have noticed that you did not actually create a Service yet - that is intentional.