The skill of communication is about comprehension as much as it is about explanation.

It depends neither on language, nor grammar, nor spelling, nor pronunciation, nor on all of them combined.

It is beyond all of these... they just help reduce the ambiguities!

# Technology Philosophy Blog

### Linux: behavior of symbolic links and argv[0]; busybox and multiarch_wrapper

```
~# cat a.c
#include <stdio.h>

int main(int argc, char** argv)
{
    printf("%s\n", argv[0]);
}
~# gcc a.c
~# ln -s a.out kk
~# ./kk
./kk
~#
```

The program prints argv[0], but we created the symlink kk -> a.out. Notice that when we execute ./kk, argv[0] is not ./a.out but ./kk!

This idea is used in BusyBox, which inspects argv[0] to decide which applet to run, and in the CLFS multiarch_wrapper, which calls different architecture-dependent binaries depending on the USE_ARCH environment variable.

http://www.embeddedlinux.org.cn/EmbeddedLinuxPrimer/0136130550/ch11lev1sec3.html

http://www.clfs.org/view/CLFS-3.0.0-SYSVINIT/x86_64/final-system/multiarch_wrapper.html
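As a rough sketch of the wrapper idea (not the exact code from the links above; the `<name>-<arch>` naming convention here is an assumption for illustration), a small C program can combine argv[0] with USE_ARCH and exec the real binary:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *arch = getenv("USE_ARCH");
    char *target;

    if (!arch) {
        fprintf(stderr, "USE_ARCH is not set\n");
        return 1;
    }

    /* Build "<invoked name>-<arch>", e.g. "./gcc-64" when the wrapper
     * is reached through a symlink named gcc and USE_ARCH=64.
     * argv[0] is whatever name the symlink had, as shown above. */
    if (asprintf(&target, "%s-%s", argv[0], arch) < 0) {
        perror("asprintf");
        return 1;
    }

    execv(target, argv);   /* replaces this process; returns only on failure */
    perror(target);
    return 1;
}
```

BusyBox uses the same trick at the other end of the exec: every applet symlink (ls, cp, sh, ...) points at the single busybox binary, which looks at the basename of argv[0] to choose which applet to run.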


### Linux: Erase/clean the MBR (partition table included) of a (pen)drive/block device using dd

The MBR size in bytes is 446 + 64 + 2 = 512:

- 446 bytes: bootstrap code
- 64 bytes: partition table
- 2 bytes: signature

So

```
# dd if=/dev/zero of=/dev/sdb bs=512 count=1
```

zeroes the first sector of the device and thus cleans everything in the MBR: bootstrap code, partition table, and signature.
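To destroy only the partitioning while preserving the bootstrap code, a narrower variant should work too (a sketch; /dev/sdb is still the assumed target device):

```
# dd if=/dev/zero of=/dev/sdb bs=1 count=66 seek=446
```

Here seek=446 skips over the bootstrap code, and count=66 covers the 64-byte partition table plus the 2-byte signature.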


### Dynamic Programming vs. Divide and Conquer

Dynamic programming is a method for solving a bigger problem by breaking it down into smaller subproblems, solving each of those subproblems only once, and storing their solutions. The next time the same subproblem occurs, instead of recomputing its solution, we simply look up the previously stored one, saving computation time at the expense of some storage space. Each subproblem's solution is indexed/hashed in some way, normally based on the values of its input parameters, to facilitate lookup. The technique of storing solutions to subproblems instead of recomputing them is called "memoization". It is an optimization technique.

Divide and Conquer works in these steps (see the merge sort sketch below):

- Divide: split the main problem into smaller subproblems.
- Conquer: solve those subproblems recursively.
- Combine: combine the results of the subproblems into the result of the main problem.
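A minimal merge sort in C, with the three steps marked in comments (the test array is just for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Merge sort: Divide the array in half, Conquer each half
 * recursively, Combine by merging the two sorted halves. */
void merge_sort(int *a, int n)
{
    if (n <= 1)                      /* base case: already sorted */
        return;

    int mid = n / 2;                 /* Divide */
    merge_sort(a, mid);              /* Conquer left half */
    merge_sort(a + mid, n - mid);    /* Conquer right half */

    int tmp[n];                      /* Combine: merge the halves */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof(int));
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7};
    merge_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);         /* prints: 1 2 5 7 9 */
    printf("\n");
    return 0;
}
```

Note that the two recursive calls work on disjoint halves of the array: the subproblems are independent, which is the Divide and Conquer signature.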

Dynamic Programming is similar to Divide and Conquer in that it divides a bigger problem into smaller subproblems. But here, each subproblem is solved only once. In Divide and Conquer the subproblems are independent of each other, while in Dynamic Programming they are not: the solution of one subproblem may be required to solve another. That is why in Dynamic Programming solutions to subproblems are remembered: we store the result of each subproblem in a table so that we don't have to compute it again and again.

Divide and Conquer examples: merge sort, Tower of Hanoi.

Dynamic Programming example: the Nth Fibonacci number.

The naive recursive Fibonacci recomputes the same subproblems over and over:

```
function fib(n)
    if n <= 1 return n
    return fib(n - 1) + fib(n - 2)
```
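A sketch of the memoized (Dynamic Programming) version in C; the memo array size of 64 is an arbitrary limit for illustration:

```c
#include <stdio.h>

/* Memoized Fibonacci: each subproblem fib(i) is computed once and
 * stored in memo[i]; later calls just look the stored value up. */
static long long memo[64];

long long fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != 0)          /* already solved: reuse the answer */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    /* O(n) additions instead of the naive version's O(2^n) calls */
    printf("%lld\n", fib(50));   /* prints 12586269025 */
    return 0;
}
```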


### Jon Bentley on code correctness.

"If it doesn't have to be correct, I can make it as fast as you'd like it to be" - Jon Bentley

### Big Omega, Big Theta and Big Omicron.

In complexity analysis,

Big Omega gives us an asymptotic lower bound function g(n).

Big Theta gives us an asymptotically tight bound function g(n), i.e. both a lower and an upper bound, c1·g(n) and c2·g(n) respectively.

Big Omicron gives us an asymptotic upper bound function g(n).

Most of the time we talk about Big Oh. That is actually Big Omicron.

[The character O is the upper-case Greek letter Omicron, not the English letter O.]

And surprisingly, when we say the complexity of a piece of code is O(N^2) and the like, we often really mean Big Theta! We mean not only that N^2 is an upper bound, but that N^2 is a lower bound too. Don't we?
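For reference, the standard textbook (CLRS-style) definitions, written out in LaTeX:

```latex
\begin{align*}
f(n) = O(g(n))      &\iff \exists\, c > 0,\ n_0 :
                          0 \le f(n) \le c\, g(n) \text{ for all } n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 :
                          0 \le c\, g(n) \le f(n) \text{ for all } n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff \exists\, c_1, c_2 > 0,\ n_0 :
                          c_1\, g(n) \le f(n) \le c_2\, g(n) \text{ for all } n \ge n_0
\end{align*}
```

Theta(g(n)) is exactly the intersection of O(g(n)) and Omega(g(n)), which is why a bound quoted as O(N^2) is only "tight" when the code also takes Omega(N^2) time.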

