Tuesday, September 27, 2016

LINUX SUSPEND / RESUME DEBUGGING TECHNIQUES

                       

initcall_debug : Adding the initcall_debug boot option to the kernel cmdline will trace initcalls and the driver pm callbacks during boot, suspend, and resume.

no_console_suspend : Adding the no_console_suspend boot option to the kernel cmdline disables suspending of consoles during suspend/hibernate.

ignore_loglevel :  Adding the ignore_loglevel boot option to the kernel cmdline prints all kernel messages to the console no matter what the current loglevel is, which is useful for debugging.

Serial console : To enable serial console, add console=ttyS0,115200 and no_console_suspend to the kernel cmdline.

Refer http://ayyappa-ch.blogspot.in/2015/07/serial-console-logging.html

Dynamic debug : Dynamic debug is designed to allow you to dynamically enable/disable kernel code to obtain additional kernel information. Currently, if CONFIG_DYNAMIC_DEBUG is set, then all pr_debug()/dev_dbg() calls can be dynamically enabled per-callsite.

Refer https://lwn.net/Articles/434856/

pm_async, pm_test:
Refer https://01.org/blogs/rzhang/2015/best-practice-debug-linux-suspend/hibernate-issues

enable PM_DEBUG and PM_TRACE:
use a script like this to suspend:

#!/bin/sh
sync
echo 1 > /sys/power/pm_trace
echo mem > /sys/power/state   # or use the suspend option from the GUI

if it doesn't come back up (which is usually the problem), reboot by holding the power button down, and look at the dmesg output for things like

Magic number: 4:156:725
hash matches drivers/base/power/resume.c:28
hash matches device 0000:01:00.0

which means that the last trace event was just before trying to resume device 0000:01:00.0. Then figure out which driver is controlling that device (lspci and /sys/devices/pci* are your friends), and see if you can fix it, disable it, or trace into its resume function.

If no device matches the hash (or any matches appear to be false positives), the culprit may be a device from a loadable kernel module that is not loaded until after the hash is checked. You can check the hash against the current devices again after more modules are loaded using sysfs:

cat /sys/power/pm_trace_dev_match
echo 1 > /sys/power/pm_trace

In one of my issues, pm_trace_dev_match showed acpi, which means the issue exists in the BIOS.

 Refer https://www.kernel.org/doc/Documentation/power/s2ram.txt


analyze_suspend : The analyze_suspend tool provides the capability for system developers to visualize the activity between suspend and resume, allowing them to identify inefficiencies and bottlenecks. For example, you can use the following command to start:

./analyze_suspend.py -rtcwake 30 -f -m mem
And 30 seconds later the system resumes automatically and generates 3 files in the ./suspend-yymmddyy-hhmmss directory:

mem_dmesg.txt  mem_ftrace.txt  mem.html

You can first open the mem.html file with a browser, and then dig into mem_ftrace.txt for data details. You can get the analyze_suspend tool via git:

git clone https://github.com/01org/suspendresume.git

For more details, go to the homepage: https://01.org/suspendresume.

Test result: mem_dmesg.txt mem_ftrace.txt mem.html


Log Files: https://drive.google.com/drive/folders/0B_UViXaGblZQcHA1Mk1UUzB0V1E

Suspend/Resume Flow:





References:
1) https://01.org/blogs/rzhang/2015/best-practice-debug-linux-suspend/hibernate-issues
2) https://www.kernel.org/doc/Documentation/power/s2ram.txt
3) https://lwn.net/Articles/434856/
4) https://github.com/01org/suspendresume
5) https://wiki.ubuntu.com/DebuggingKernelSuspend

Wednesday, August 3, 2016

Google profiler for Performance and Memory Analysis



Google profiler tool installation for Ubuntu:
sudo apt-get install google-perftools

Analyse memory consumption:
LD_PRELOAD=/usr/lib/libtcmalloc.so.0.0.0 HEAPPROFILE=gpt-heapprofile.log ./your-program

To analyse mmap, set the HEAP_PROFILE_MMAP environment variable to TRUE.

Performance analysis:
LD_PRELOAD=/usr/lib/libprofiler.so.0.4.5 CPUPROFILE=/home/amd/gst-log gst-launch-1.0 -f filesrc location= ./1080p_H264.mp4 ! qtdemux ! h264parse ! vaapidecode ! filesink location= test.yuv


Convert data to PDF format:
google-pprof --pdf  /usr/bin/python /home/amd/gst-log >  profile_output.pdf

Text output can be obtained by typing:
google-pprof --text /usr/bin/python /home/amd/gst-log > profiling_output.txt

The file "/home/amd/gst-log" can also be analyzed with some specific graphical interfaces like "kcachegrind".

To prepare the data for kcachegrind, type:
   google-pprof --callgrind /usr/bin/python /home/amd/gst-log > profiling_kcachegrind.txt

To visualize the information, use kcachegrind:
   kcachegrind profiling_kcachegrind.txt &

Example view of Performance Analysis:





References:
http://goog-perftools.sourceforge.net/doc/cpu_profiler.html

http://alexott.net/en/writings/prog-checking/GooglePT.html

http://kratos-wiki.cimne.upc.edu/index.php/How_to_Profile_an_application_(using_google-perftools)

http://stackoverflow.com/questions/10874308/how-to-use-google-perf-tools




Tuesday, August 2, 2016

Debugging using gdb Tracepoints


Trace command: 
The trace command is very similar to the break command. Its argument can be a source line, a function name, or an address in the target program.

The trace command defines a tracepoint, which is a point in the target program where the debugger will briefly stop, collect some data, and then allow the program
to continue.

Setting a tracepoint or changing its commands doesn't take effect until the next tstart command.

(gdb) trace foo.c:121    // a source file and line number

(gdb) trace +2           // 2 lines forward

(gdb) trace my_function  // first source line of function

(gdb) trace *my_function // EXACT start address of function

(gdb) trace *0x2117c4    // an address

(gdb) delete trace 1 2 3 // remove three tracepoints

(gdb) delete trace       // remove all tracepoints

(gdb) info trace        // trace points info


Starting and Stopping Trace Experiment:

tstart : It starts the trace experiment, and begins collecting data.

tstop :  It ends the trace experiment, and stops collecting data.

tstatus : This command displays the status of the current trace data collection.


Enable and Disable Tracepoints:

disable tracepoint [num] : Disable tracepoint num, or all tracepoints if no argument num is given.

enable tracepoint [num] : Enable tracepoint num, or all tracepoints.


Tracepoint Passcounts:

passcount [n [num]] : Set the passcount of a tracepoint. The passcount is a way to automatically stop a trace experiment. If a tracepoint's passcount is n,  then the trace experiment will be automatically stopped on the n'th time that tracepoint is hit. If the tracepoint number num is not specified, the passcount command sets the passcount of the most recently defined tracepoint. If no passcount is given, the trace experiment will run until stopped explicitly by the user.

Examples:
(gdb) passcount 5 2 // Stop on the 5th execution of  tracepoint 2

(gdb) passcount 12  // Stop on the 12th execution of the most recently defined tracepoint.
(gdb) trace foo
(gdb) pass 3
(gdb) trace bar
(gdb) pass 2
(gdb) trace baz
(gdb) pass 1        // Stop tracing when foo has been
                           // executed 3 times OR when bar has
                           // been executed 2 times
                           // OR when baz has been executed 1 time.


Tracepoint Action Lists:

actions [num] : This command will prompt for a list of actions to be taken when the tracepoint is hit. If the tracepoint number num is not specified, this command sets the actions for the tracepoint that was most recently defined. You specify the actions themselves on the following lines, one action at a time, and terminate the actions list with a line containing just end.

> collect data         // collect some data

> while-stepping 5     // single-step 5 times, collect data

> end                  // signals the end of actions.


collect expr1, expr2, ...
Collect values of the given expressions when the tracepoint is hit. This command accepts a comma-separated list of any valid expressions.

In addition to global, static, or local variables, the following special arguments are supported:

$regs
collect all registers
$args
collect all function arguments
$locals
collect all local variables.

Example:

(gdb) trace gdb_c_test
(gdb) actions
Enter actions for tracepoint #1, one per line.
> collect $regs,$locals,$args
> while-stepping 11
  > collect $regs
  > end
> end
(gdb) tstart
[time passes ...]
(gdb) tstop


Using the collected data:

tfind start
Find the first snapshot in the buffer. This is a synonym for tfind 0 (since 0 is the number of the first snapshot).
tfind none
Stop debugging trace snapshots, resume live debugging.
tfind end
Same as `tfind none'.
tfind
No argument means find the next trace snapshot.

The tracepoint facility is currently available only for remote targets.





Reference Link:

ftp://ftp.gnu.org/old-gnu/Manuals/gdb/html_chapter/gdb_10.html

http://stackoverflow.com/questions/3691394/gdb-meaning-of-tstart-error-you-cant-do-that-when-your-target-is-exec

http://stackoverflow.com/questions/38716790/gdb-meaning-of-tstop-error-you-cant-do-that-when-your-target-is-multi-thread

Linux Kernel Tracepoints, TRACE_EVENT() macro and Perf Tool



Why Tracepoints are needed:

It is not feasible for the debugger to interrupt the program's execution long enough for the developer to learn anything helpful about its behavior. If the program's correctness depends on its real-time behavior, delays introduced by a debugger might cause the program to change its behavior drastically, or perhaps fail, even when the code itself is correct. It is useful to be able to observe the program's behavior without interrupting it.

What are Tracepoints:

A tracepoint placed in code provides a hook to call a function that you can provide at runtime.

A tracepoint can be "on" or "off".

When a tracepoint is "off" it has no effect, except for adding a tiny time penalty and space penalty.

When a tracepoint is "on", the function you provide is called each time the tracepoint is executed, in the execution context of the caller.

When the function provided ends its execution, it returns to the caller.

You can put tracepoints at important locations in the code.

Unlike Ftrace's function tracer, tracepoints can record local variables of the function.

A tracepoint is a function call in the kernel code that, when enabled, calls a callback function, passing the parameters of the tracepoint to that function as if the callback function were called with those parameters.

TRACE_EVENT() macro was specifically made to allow a developer to add tracepoints to their subsystem and have Ftrace automatically be able to trace them.

The anatomy of the TRACE_EVENT() macro:
It must create a tracepoint that can be placed in the kernel code.

It must create a callback function that can be hooked to this tracepoint.

The callback function must be able to record the data passed to it into the tracer ring buffer in the fastest way possible.

It must create a function that can parse the data recorded to the ring buffer and translate it to a human readable format that the tracer can display to a user.


Playing with trace events:

cd /sys/kernel/debug/tracing/events

root@amd-PADEMELON:/sys/kernel/debug/tracing/events/drm# ls
drm_vblank_event  drm_vblank_event_delivered  drm_vblank_event_queued  enable  filter

echo 1 > ./drm/enable

The enable files are used to enable tracepoints. The enable file at the top of the events directory enables or disables all events in the system; the enable file in a subsystem's directory enables or disables all events within that subsystem; and the enable file within a specific event directory enables or disables that single event.


Tracepoint logs can be seen within the Ftrace logs:

 3) + 20.320 us   |  dm_crtc_high_irq [amdgpu]();
 0)               |  /* drm_vblank_event_queued: pid=2430, crtc=0, seq=297608 */
 0)               |  send_vblank_event [drm]() {
 0)               |  /* drm_vblank_event_delivered: pid=2430, crtc=0, seq=297608 */
 0)   3.858 us    |  }
 3) + 24.783 us   |  dm_crtc_high_irq [amdgpu]();
 3)               |  dm_pflip_high_irq [amdgpu]() {
 3)               |    drm_send_vblank_event [drm]() {
 3)               |      send_vblank_event [drm]() {
 3)               |        /* drm_vblank_event_delivered: pid=0, crtc=0, seq=297609 */
 3) + 33.556 us   |      }
 3) + 35.227 us   |    }


We can set required events using set_event. It is the same as enabling a specific event using its enable file.

 [tracing] # echo drm_vblank_event drm_vblank_event_delivered drm_vblank_event_queued > set_event


PERF TOOL:
One of the key secrets for quick use of tracepoints is the perf tool. The CONFIG_EVENT_PROFILE configuration option should be set.

perf is available at tools/perf in the kernel source tree.

$ perf list -> List of events available in the system

$ perf stat -a -e kmem:kmalloc sleep 10  -> How many kmalloc() calls are happening on a system

The -a option gives whole-system results

$ perf stat -e kmem:kmalloc make  -> Monitoring allocations during the building of the perf tool




https://www.kernel.org/doc/Documentation/trace/tracepoints.txt - Kernel TracePoint
http://lwn.net/Articles/379903/
http://lwn.net/Articles/381064/
http://lwn.net/Articles/383362/
ftp://ftp.gnu.org/old-gnu/Manuals/gdb/html_chapter/gdb_10.html - GDB Tracepoints usage
https://lwn.net/Articles/346470/  - PERF TOOL


Tuesday, July 26, 2016

The Beauty of Strace

Strace is a diagnostic, debugging and instructional user-space utility for Linux. It is used to monitor interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state.

The following is an example of typical output of the strace command:

ioctl(13, DRM_IOCTL_TEGRA_GEM_MMAP, 0x7ffd5182d670) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_SHARED, 13, 0x137e48000) = 0x7f83d5cba000
munmap(0x7f83d5cba000, 8192)            = 0
ioctl(13, _IOC(_IOC_READ|_IOC_WRITE, 0x64, 0x43, 0x18), 0x7ffd5182d640) = 0
ioctl(13, _IOC(_IOC_READ|_IOC_WRITE, 0x64, 0x49, 0x20), 0x7ffd5182d630) = 0
ioctl(13, _IOC(_IOC_READ|_IOC_WRITE, 0x64, 0x44, 0x18), 0x7ffd5182d620) = 0
ioctl(13, _IOC(_IOC_READ|_IOC_WRITE, 0x64, 0x43, 0x18), 0x7ffd5182d670) = 0
write(10, "\0", 1)                      = 1
futex(0x5590ec432890, FUTEX_WAKE_PRIVATE, 1) = 1
write(10, "\0", 1)                      = 1
futex(0x5590ec432890, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x5590ec418784, FUTEX_CMP_REQUEUE_PRIVATE, 1, 2147483647, 0x5590ec418758, 52) = 1

User-space hang issues can be analysed using strace.

Strace gives detailed info about which libraries and modules get loaded when running the program.

Missing modules and libraries can be easily fixed with strace.

Futex wait issues can be analysed using strace.

How to do Strace:

$ strace  glxgears

$ strace  mpv --hwdec=vdpau -vo vdpau "filename"

$ strace -f  mpv "filename"

$ strace -p pid

$ strace -f -p pid

-f = Trace child processes

For  Help:

$ man strace

# strace  -h

To get the PIDs of multiple threads running in a common process:

$ ps -efL | grep mpv (process name)


How to get call stack using gdb for all threads:

$ gdb -p <process pid>

(gdb) thread apply all bt    =>  gives all threads call stack under attached process.

Wednesday, July 20, 2016

Debugging SUSPEND / RESUME issues on Linux

This blog contains most of the information for suspend resume issues :
https://01.org/blogs/rzhang/2015/best-practice-debug-linux-suspend/hibernate-issues

Some more information:

Thursday, March 3, 2016

Seqlock Vs RCU Vs Per-CPU

Seqlock:

A seqlock is a special locking mechanism used in Linux for supporting fast writes of shared variables between two parallel operating system routines.

It is a reader-writer consistent mechanism which avoids the problem of writer starvation.

A seqlock consists of storage for saving a sequence number in addition to a lock.

The lock is to support synchronization between two writers and the counter is for indicating consistency in readers.

Whenever the writer updates the shared data, it increments the sequence number, both after acquiring the lock and before releasing the lock. The reader reads the sequence number before and after reading the shared data. If the sequence number is odd on either occasion, the lock was held by a writer while the reader was reading and the data may have changed. And if the two sequence numbers differ, a writer changed the data while it was being read. In either case the reader retries, until it reads the same even sequence number before and after.

The reader never blocks, but it may have to retry if a write is in progress; this speeds up the readers in the case where the data was not modified, since they do not have to acquire the lock as they would with a traditional read-write lock.

Also, writers do not wait for readers, whereas with traditional read-write locks they do, leading to potential resource starvation in a situation where there are a number of readers (because the writer must wait for there to be no readers).

Because of these two factors, seqlocks are more efficient than traditional read-write locks for the situation where there are many readers and few writers.

The drawback is that if there is too much write activity or the reader is too slow, they might livelock (and the readers may starve).

It should also be noted that the technique will not work for data that contains pointers, because any writer could invalidate a pointer that a reader has already followed. In this case, using read-copy-update synchronization is preferred.


RCU (Read Copy Update):

RCU allows reads to occur concurrently with updates. Let us have a pointer ptr pointing to the shared data. A reader works by reading the data pointed to by ptr, and an update is done by first allocating new data and then setting the value of ptr to the newly allocated data.

As you can see, setting the value of a location in memory is an atomic operation, so no locks are required. But there are other concerns. At any time more than one reader could be reading the data pointed to by the same pointer; in that case freeing this data would give unexpected results. So, RCU ensures that data is not freed up until all pre-existing read-side critical sections complete.

RCU is made up of three fundamental mechanisms:
Publish-Subscribe Mechanism (for insertion)
Wait for pre-existing RCU readers to complete (for deletion)
Maintain multiple versions of recently updated objects (to allow readers to tolerate concurrent insertions and deletions)

With RCU, you can concurrently update data containing pointers but in SeqLock you cannot. Because, it may happen the reader has dereferenced the pointer and reading data pointed by it but in the middle of this process writer just invalidate it.

It may seem that in both Seqlock and RCU, readers and writers can work concurrently. Well, no. In Seqlock, whenever a reader is working and a writer steps in, then according to the protocol the reader has to retry, because the data may have been changed by the writer in that time. So reader and writer can run concurrently, but cannot do useful work concurrently.
In contrast, RCU readers can perform useful work even in the presence of concurrent RCU updaters.
Also, in Seqlock writers require locking, but in RCU both readers and writers can avoid it altogether. Both work best when there are few writers and many readers.


How can the updater tell when a grace period has completed if the RCU readers give no indication when they are done?

Just as with spinlocks, RCU readers are not permitted to block, switch to user-mode execution, or enter the idle loop. Therefore, as soon as a CPU is seen passing through any of these three states, we know that that CPU has exited any previous RCU read-side critical sections. So, if we remove an item from a linked list, and then wait until all CPUs have switched context, executed in user mode, or executed in the idle loop, we can safely free up that item.

Reference :
1) https://www.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/x490.html
2) https://www.kernel.org/doc/Documentation/RCU/rcu.txt
3) https://www.kernel.org/doc/Documentation/RCU/listRCU.txt
4) http://www.rdrop.com/users/paulmck/RCU/

Tuesday, February 23, 2016

The beauty of FTRACE

The following kernel configuration options need to be enabled for Ftrace:
CONFIG_FUNCTION_TRACER
CONFIG_FUNCTION_GRAPH_TRACER
CONFIG_STACK_TRACER
CONFIG_DYNAMIC_FTRACE

Ftrace Sys path :
[~]# cd /sys/kernel/debug/tracing
[tracing]#

Stack Tracing:
The stack tracer checks the size of the stack at every function call. If it is greater than the last recorded maximum, it records the stack trace and updates the maximum with the new size. To see the current maximum, look at the stack_max_size file.

[tracing]# echo 1 > /proc/sys/kernel/stack_tracer_enabled
[tracing]# cat stack_max_size
2928
[tracing]# cat stack_trace


List of available tracers:
[tracing]# cat available_tracers 
function_graph function sched_switch nop

Setting current_tracer:
[tracing]# echo function > current_tracer
[tracing]# cat current_tracer
function

Setting trace buffer size:
[tracing]# echo 50 > buffer_size_kb

Adding module for ftrace filter:
[tracing]# echo ':mod:amdgpu' > set_ftrace_filter

This will overwrite any existing entries in the filter and add all the functions available in the amdgpu module for tracing.

Appending a module to the ftrace filter:
[tracing]# echo ':mod:ttm' >> set_ftrace_filter

Note that '>>' is used. It appends the ttm module to the existing list of modules.

Adding a set of functions starting with a specific name for tracing:
[tracing]# echo 'sched*' > set_ftrace_filter
[tracing]# echo 'schedule:traceoff' >> set_ftrace_filter

All function names starting with sched are added for tracing.

Adding specific pid for Tracing:
[tracing]# echo $$ > set_ftrace_pid

View Function graph for particular function:
[tracing]# echo kfree > set_graph_function
[tracing]# echo function_graph > current_tracer
[tracing]# cat trace

It will display the function flow for kfree.

   
Removing unwanted functions containing a specific name:
[tracing]# echo '!*lock*' >> set_ftrace_filter


The '!' symbol will remove functions listed in the filter file. As shown above, the '!' works with wildcards, but could also be used with a single function. Since '!' has special meaning in bash it must be wrapped with single quotes or bash will try to execute what follows it. Also note the '>>' is used. If you make the mistake of using '>' you will end up with no functions in the filter file.


References:
http://lwn.net/Articles/365835/  - ftrace part1
https://lwn.net/Articles/366796/  - ftrace part2
https://lwn.net/Articles/370423/  - ftrace secrets

Sunday, February 21, 2016

Spin-lock usage with respect to Process, Bottom Half and Top Half Context

For kernels compiled without CONFIG_SMP and without CONFIG_PREEMPT, spinlocks do not exist at all: when no one else can run at the same time, there is no reason to have a lock.

If the kernel is compiled without CONFIG_SMP, but CONFIG_PREEMPT is set, then spinlocks simply disable preemption, which is sufficient to prevent any races.

Linux guarantees the same interrupt will not be re-entered.


spin_lock(lock):
=>  Acquire the spin lock

=> Under certain circumstances, it is not necessary to disable local interrupts. For example, most filesystems only access their data structures from process context and acquire their spinlocks by calling spin_lock(lock).

=> If another tasklet/timer wants to share data with your tasklet or timer, you will both need to use spin_lock() and spin_unlock() calls. spin_lock_bh() is unnecessary here, as you are already in a tasklet, and none will run on the same CPU.


spin_lock_irq(lock)  :
=> Disable interrupts on local CPU
=> acquire the spin lock

=> If the code in process context is holding a spinlock and the code in interrupt context attempts to acquire the same spinlock, it will spin forever. For this reason, it is recommended that spin_lock_irq() is always used.

=> Data shared between interrupt context and softirq, tasklet, or process context needs to be protected with spin_lock_irq().
 

spin_lock_irqsave(lock , flags) :
=> saves current interrupt state into flags
=> Disable interrupts on local CPU
=> acquire the spin lock

=> To share data between two hard IRQ handlers (interrupt contexts), use this locking technique.

=> The same code can be used inside a hard IRQ handler (where interrupts are already off) and in softirqs (where IRQ disabling is required).


spin_lock_bh(lock):
=> Disable softirqs on the current CPU
=> acquire the spin lock

=> If a data structure is accessed only from process and bottom-half context, spin_lock_bh() can be used instead. This optimisation allows interrupts to come in while the spinlock is held, but doesn't allow bottom halves to run on exit from the interrupt routine; they will be deferred until the spin_unlock_bh().

 
 
Locking between same softirq sharing data :
The same softirq can run on the other CPUs: you can use a per-CPU array for better performance. If you're going so far as to use a softirq, you probably care about scalable performance enough to justify the extra complexity. You'll need to use spin_lock() and spin_unlock() for shared data.


Locking Between Hard IRQ and Softirqs/Tasklets:
If a hardware irq handler shares data with a softirq, you have two concerns. Firstly, the softirq processing can be interrupted by a hardware interrupt, and secondly, the critical region could be entered by a hardware interrupt on another CPU. This is where spin_lock_irq() is used. It is defined to disable interrupts on that cpu, then grab the lock. spin_unlock_irq() does the reverse.

The IRQ handler does not need to use spin_lock_irq(), because the softirq cannot run while the IRQ handler is running: it can use spin_lock(), which is slightly faster. The only exception would be if a different hardware IRQ handler uses the same lock: spin_lock_irq() will stop that from interrupting us.



Saturday, February 20, 2016

Deadlock Vs Livelock Vs Starvation

Deadlock: A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.

For example, consider two processes, P1 and P2, and two resources, R1 and R2. Suppose that each process needs access to both resources to perform part of its function. Then it is possible to have the following situation: the OS assigns R1 to P2, and R2 to P1. Each process is waiting for one of the two resources. Neither will release the resource that it already owns until it has acquired the other resource and performed the function requiring both resources. The two processes are deadlocked.

Livelock: A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work:

For example, consider two processes each waiting for a resource the other has, but waiting in a non-blocking manner. When each learns it cannot continue, it releases its held resource and sleeps for some time, then reacquires its original resource and again tries to acquire the resource the other process holds, releases it again, and so on. Since both processes are trying to cope (just badly), this is a livelock.

Starvation: A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.

For example, consider three processes (P1, P2, P3), each requiring periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that the OS grants access to P3 and that P1 again requires access before P3 completes its critical section. If the OS grants access to P1 after P3 has finished, and subsequently alternately grants access to P1 and P3, then P2 may be denied access to the resource indefinitely, even though there is no deadlock situation.