Software resources for all

  • Datagram sockets use UDP, which provides connectionless communication with minimal overhead, making it suitable for applications that need speed. (See the sketch after this list.)
  • The client-server model is prevalent in distributed systems. Network socket programming calls are explained with example programs in C using TCP.
  • Signals are software interrupts that are delivered to a process by the kernel. A signal is a notification about an event that has occurred. (See the sketch after this list.)
  • systemd is the system and service manager for Linux based systems. systemd concepts, configuration directives and commands are explained.
  • Syslog is a protocol for conveying event notification messages. Syslog concepts and calls for logging messages are explained.
  • D-Bus is a mechanism for interprocess communication for Linux systems. D-Bus concepts along with example client-server programs are explained.
  • The Linux commands last and lastb, for printing the user login history, are explained with examples.
  • tmpfs is a temporary filesystem that resides in the main memory of a Linux system.
  • POSIX shared memory calls are explained with example client-server programs. (See the sketch after this list.)
  • Shared memory is the fastest mechanism for interprocess communication. System V shared memory calls are explained with example programs.
  • An implementation of a queue in C using a linked list is provided, with the program listing and results. (See the sketch after this list.)
  • Pthread synchronization using mutex and condition variables is explained with an example program. (See the sketch after this list.)
  • The basic Pthread calls are explained with an example program.
  • POSIX named and unnamed semaphores are explained with example programs. (See the sketch after this list.)
  • System V semaphore calls are explained with an example program.
  • Semaphores provide a mechanism for synchronizing processes and threads. The basics of semaphores are explained with examples.
  • POSIX message queue calls for interprocess communication (IPC) between processes are explained with example server and client programs under Linux. (See the sketch after this list.)
  • Processes running on a Linux system can exchange information using System V message queues. The system calls for System V message queues are explained along with example client-server programs.
  • How to restore the debug connection between the host and an Android device after getting the ADB log error, AdbCommandRejectedException getting properties for device, is explained.
  • Any two processes can communicate using FIFOs in Linux. Interprocess communication using FIFOs is explained using a client-server example.
  • The pipe is a fundamental interprocess communication mechanism in Linux. Interprocess communication using pipes is explained with an example. (See the sketch after this list.)
  • The relationship between the fork and exec system calls and how to use them is explained with an example. (See the sketch after this list.)
  • Git commands for common source code management use cases are listed.
  • How to import an Android Studio project under Git is explained.
  • The cut command cuts out sections of each line of the input files and writes them on its standard output. It is mostly used for extracting a few columns from the input files.
  • bash idioms are tiny scripts, mostly one-liners, that accomplish a lot and can be used as building blocks in bigger scripts.
  • The tr command is a filter which reads the standard input, translates or deletes characters and writes the resulting text on its standard output.
  • The comm command compares two sorted files and gives output in three columns, containing the lines present only in the first file, the lines present only in the second file and the lines common to both files respectively.
  • The uniq command is a filter for finding unique lines in its input. It reads the input, suppresses adjacent duplicate lines and prints the unique lines in its output.
  • The sort command is for sorting lines in text files. There are options to define keys, reverse sort order, numeric sort and to merge files.
  • awk is a filter which takes the input and gives output after matching desired patterns and doing processing linked to matched patterns.
  • sed is a stream editor that applies the commands one by one on each line of input and writes the resulting lines on the standard output.
  • grep is a program for searching a given string pattern in files. It searches files for the pattern and prints the lines that match the pattern.
  • A bash script is a list of commands written to automate some system operations work in Linux. The commands for making a bash script are explained.
  • A command line interface (CLI) shell provides a powerful, precise and flexible way of running commands. bash is the default CLI shell under Linux.
  • tc is a user space program for managing queueing disciplines (qdiscs) for network interfaces. tc is used for configuring traffic control in the kernel.
  • The ss command gives the socket statistics. It gives information about the network connections. ss is a replacement for the netstat command.
  • The umask command sets a mask that is used for managing the permissions of files created by processes during a login session.
  • The ip command is used for IPv4 and IPv6 network configuration. It replaces the older ifconfig and related commands.
  • htop is an ncurses-based program for viewing processes on a system running Linux. htop supports mouse operation, uses color in its output and gives visual indications of processor, memory and swap usage.
  • A major part of Software Configuration Management is a system for controlling the source code revisions. The issues involved in source code revision control are discussed.
  • The vimdiff command runs vim in the diff mode on two, three or four files. It is useful for comparing multiple versions of a file and for editing those versions based on differences.
  • Git is a source code management system. Git simplifies the SCM functions for a project and improves the overall efficiency of the software process.
  • The iptables command under Linux helps us in establishing and configuring a firewall by defining rules for packet filtering. iptables also helps in configuring the Network Address Translation (NAT) table.
  • As we work on multiple computers, we need to transfer the work done from one computer to another. Doing this manually is prone to error. This synchronization can be automated using Unison. Unison provides bidirectional synchronization between two computers.
  • The netstat command in Linux provides network statistics and information about the networking subsystem. It gives information about network connections, routing tables and network interface statistics.
  • Quite often, we wish to connect two computers back to back using an Ethernet LAN cable. It may be because we wish to transfer files between the two computers, or because one of them has Internet access and we wish to have one more access point to the Net. Step-by-step instructions on how to do this are given here.
  • The Routing Table contains the routes for forwarding the IP packets on each network interface. Commands to view and modify the Routing Table are explained with examples.
  • logrotate is a utility for rotation, compression, removal and mailing of log files. logrotate does this work daily, weekly, monthly, or when the log file becomes bigger than a predefined limit.
  • The sar command under Linux and Unix systems gives the system activity reports. The sar command gives statistics regarding the CPU, I/O, paging, devices, memory, swap space, network, run queue length and load average, interrupts and power management.
  • The pidstat command gives the CPU utilization, I/O statistics, page faults, memory utilization and stack details for processes and threads in Linux systems.
  • The mpstat command in a Linux or Unix system gives the processor related statistics. The command gives the CPU utilization report and the hardware and software interrupts per second for each processor.
  • The iostat command is for getting the CPU and input-output statistics for Linux and Unix systems.
  • Installation and configuration steps for nginx HTTP server and associated packages, apc and PHP-FPM, are given.
  • The vmstat command prints the system virtual memory statistics for Linux and Unix systems. vmstat prints information about system processes, memory, swap, block I/O, interrupts, context switches and CPU activity.
  • The uptime and w commands in Linux and Unix systems help in finding the system uptime, the load average and information about logged-in users.
  • Using the free command, the free and used memory in a Linux or Unix system can be found.
  • Load average, reported by commands like top, uptime and w, is explained. Load average is an indication of whether the system resources (mainly the CPU) are adequately available for the processes (system load) that are running, runnable or in uninterruptible sleep states during the previous n minutes.
  • The top command is useful for monitoring a Linux or Unix system continuously for processes that take more system resources like the CPU time and the memory. top periodically updates the display showing the high resource consuming processes at the top.
  • The ps command is for the process status. ps gives the status of currently executing processes on a Linux or Unix system. There are multiple options for the ps command that help in getting the process information of interest.
  • The Coordinated Universal Time (UTC) is the universally accepted time standard. Computers around the world use it as the reference time. It is important that our computers have the correct UTC. The Network Time Protocol (NTP) helps us in synchronizing the time of our computers with the UTC maintained by timeservers.
  • Software systems often need to act based on time. The accuracy and precision of time maintained by a system is important. Alarms, sleep and high resolution timers provide a framework for application programs to carry out time-critical tasks.
  • As a process executes, it takes up the CPU time, which is also called the execution time or the processor time. The times system call and the clock library function can be used by the process to find its execution time. (See the sketch after this list.)
  • Linux systems maintain a system clock which stores the calendar time. This is stored as the number of seconds elapsed since January 1, 1970 00:00:00 UTC. Applications can query this system clock with the time and gettimeofday system calls. There are functions to convert the system calendar time to the familiar local time string. (See the sketch after this list.)
  • The hardware real time clock tends to lose or gain time every day by a constant amount. This is called the systematic drift. The hwclock program helps in calibrating and adjusting the hardware clock.
  • anacron helps in running shell scripts once in a given number of days. Unlike cron, anacron does not require that the exact time of running the script be fixed a priori. For this reason, anacron is particularly relevant for home computers and laptops, which are not switched on all the time.
  • The cron daemon runs shell scripts for root and other users at predefined times. cron is the standard way for running programs periodically on Linux and other Unix-like systems.
  • Backing up a Drupal site involves copying the site's root directory structure and creating a dump of the site's database. The backed-up data can be used for restoring the site later on.
  • Moving a Drupal site to another host involves making a copy of the site's root directory structure and creating a dump of the site's database. On another host, the database is restored from the backup and the site's root directory tree structure is installed.
  • The definitions of program, process and thread, and the relationships between them, are explained.
  • Sometimes we need to start a server program during the post-boot system initialization after the network has become available. On Linux systems using the Upstart init daemon, this can be done as described here.
  • Although an md5sum checksum file is a text file, it has a special format. If the checksum file is not as per this format, we get the error, md5sum: no properly formatted MD5 checksum lines found.
  • In the X-Window system, the meanings of the terms client and server are somewhat counter-intuitive. The X-server running on a host and managing the display is the server, whereas an application running on the same host or some other host on the network is the client. Here, we look at the problem of a local application, the client, using the display, or the X-server, of a remote host.
  • JRobin is a Java port of the famous RRDTool package. While trying to save a graph as a PNG file, we run into problems as saveAsPNG gives a compile-time error. But there is a solution, as described here.
  • There is often a requirement to watermark images, especially for protecting the copyright. Watermarking an image is easily done using Gimp.
  • It is easy to make a logo using Gimp, as described here.
  • A newly compiled kernel halted during boot with the message, kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0).
  • While running a GTK+ based C program under Cygwin, the following runtime error is observed, Gtk-WARNING **: cannot open display:
  • A C program was compiled under Cygwin and run directly under Windows. The following runtime error was observed, cygwin1.dll not found
  • Multi-boot systems with Linux and Windows are quite common. It is easy to access Windows partitions under Linux, as described here.
  • rsync is used for synchronizing the source files with the corresponding files at the destination. The source and destination may be on the same host, in which case rsync becomes an advanced copy command. Or, the destination may be a networked remote host. rsync uses the delta encoding technique for copying files; for files existing at the destination, only the differences from the source are transferred.
  • The vi editor has withstood the test of time. There are many reasons for its continued popularity: there are separate edit and insert modes, so the chance of inserting text inadvertently is minimal; cut, copy and paste are terribly efficient; navigation in large files is fast; and generally everything about vi is fast. vi is especially suited for writing large programs.
  • The make program automates generation of executable files from the program's source files. make compiles only those source files which have changed since the last time the corresponding object files were created. This is a slightly advanced level tutorial.
  • The make program automates generation of executable files from the program's source files. make compiles only those source files which have changed since the last time the corresponding object files were created. This is a beginner level tutorial.
  • A Linux system provides many services to the users, like networking, system logging, secure shell access or the print spooler. Some of these services are started during the system initialization procedure at the boot time. Others are started when certain circumstances or events occur. Similarly these services are terminated at system shutdown or before. The start and stop of these services is controlled by the init scripts.
  • This tutorial looks at the GNU Build System. From an end-user's perspective, it first describes how to build the binary executable of a GNU free and open source software package from the available source code and install it on your system. Then, from a programmer's perspective it looks at the GNU Build System for generating the scripts and makefiles which provide the infrastructure that enables the end user to build and install the GNU software executables.
  • The diff command is used for finding differences between two versions of a set of files organized in a directory structure. The patch command is used to generate the newer version of the files from the older version and the diff output of the differences between the two versions.
  • The major software process models are the waterfall model, the evolutionary model and the spiral model. As per the agile concurrent software process model, the activities of the waterfall model are not done in a strict sequence in a project; these activities are done concurrently, with varying intensities, at all times during the software life cycle of a project.
  • Time is the most important requirement for software development projects. Unfortunately, most of the project estimation efforts aim to reduce the time duration of a project. This has a detrimental effect on the project. On the other hand, if the project timeline is a little relaxed, a better project is conceptualized and a design is made that scales well with new requirements.
  • Scrum is an agile software development methodology. Scrum is an incremental software process model, where a project is divided into smaller sub-projects, with each sub-project aiming to add an increment to the working version of the final software. Each sub-project is executed in a sprint of four to six weeks' duration.
  • Unix was developed in the late sixties and the seventies at Bell Labs. Unix is one of the most important developments in the history of computer software which has influenced the development of operating systems, software development environments and overall computing in general. Since Unix has been such a great success, its development is a valuable case study in software engineering.
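
A few minimal C sketches for some of the topics above are collected below. They are illustrative outlines only, not the full example programs from the corresponding pages; the names, ports, sizes and loop counts used in them are arbitrary choices made for the illustration.

Datagram sockets (UDP): a minimal sketch of a sender that pushes one datagram to 127.0.0.1 on port 9000 (address and port are assumptions for the illustration); no connection is set up before sending.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP socket */
        if (sock == -1) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in dest = { 0 };
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9000);                 /* illustrative port */
        inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

        const char *msg = "hello over UDP";
        if (sendto(sock, msg, strlen(msg), 0,
                   (struct sockaddr *)&dest, sizeof dest) == -1)
            perror("sendto");

        close(sock);
        return 0;
    }

A UDP listener on port 9000, for example one started with netcat, would receive the message.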
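
Signals: a minimal sketch that installs a SIGINT handler with sigaction and waits for Ctrl-C.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int signum)
    {
        got_signal = signum;           /* only async-signal-safe work here */
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        if (sigaction(SIGINT, &sa, NULL) == -1) {
            perror("sigaction");
            return 1;
        }
        printf("Press Ctrl-C to deliver SIGINT ...\n");
        while (!got_signal)
            pause();                   /* sleep until a signal arrives */
        printf("Caught signal %d\n", (int)got_signal);
        return 0;
    }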
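
POSIX shared memory: a minimal sketch that creates a shared memory object, maps it with mmap and writes a string into it, all within one process; a real client-server pair would map the same name from two processes. The name /demo_shm and the size are assumptions for the illustration. On older glibc, link with -lrt.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *name = "/demo_shm";              /* illustrative name */
        const size_t size = 4096;

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd == -1) {
            perror("shm_open");
            return 1;
        }
        ftruncate(fd, size);                         /* set the object size */

        char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        strcpy(p, "hello from shared memory");       /* visible to any process
                                                        that maps the same name */
        printf("wrote: %s\n", p);

        munmap(p, size);
        close(fd);
        shm_unlink(name);                            /* remove the object */
        return 0;
    }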
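
Queue in C using a linked list: a minimal sketch of enqueue and dequeue operations on a singly linked list with head and tail pointers.

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    struct queue {
        struct node *head;    /* dequeue from the head */
        struct node *tail;    /* enqueue at the tail   */
    };

    static void enqueue(struct queue *q, int value)
    {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next = NULL;
        if (q->tail)
            q->tail->next = n;
        else
            q->head = n;
        q->tail = n;
    }

    static int dequeue(struct queue *q, int *value)
    {
        struct node *n = q->head;
        if (!n)
            return -1;                /* queue is empty */
        *value = n->value;
        q->head = n->next;
        if (!q->head)
            q->tail = NULL;
        free(n);
        return 0;
    }

    int main(void)
    {
        struct queue q = { NULL, NULL };
        int v;
        for (v = 1; v <= 3; v++)
            enqueue(&q, v);
        while (dequeue(&q, &v) == 0)
            printf("dequeued %d\n", v);
        return 0;
    }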
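
Pthread synchronization: a minimal sketch in which a worker thread sets a flag under a mutex and signals a condition variable, while the main thread waits for the flag. Build with -pthread.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        sleep(1);                       /* pretend to do some work */
        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);     /* wake up the waiting thread */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        pthread_mutex_lock(&lock);
        while (!ready)                  /* guard against spurious wakeups */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);

        printf("worker signalled readiness\n");
        pthread_join(tid, NULL);
        return 0;
    }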
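
POSIX unnamed semaphore: a minimal sketch in which the main thread blocks on sem_wait until another thread posts the semaphore. Build with -pthread.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t sem;

    static void *poster(void *arg)
    {
        (void)arg;
        printf("posting the semaphore\n");
        sem_post(&sem);                 /* increment; wakes the waiter */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        sem_init(&sem, 0, 0);           /* shared between threads, value 0 */
        pthread_create(&tid, NULL, poster, NULL);
        sem_wait(&sem);                 /* blocks until poster runs */
        printf("semaphore acquired\n");
        pthread_join(tid, NULL);
        sem_destroy(&sem);
        return 0;
    }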
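
POSIX message queue: a minimal sketch that creates a queue, sends one message and reads it back within a single process; a client-server pair would open the same queue name from two processes. The name /demo_mq and the attributes are assumptions for the illustration. On older glibc, link with -lrt.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    int main(void)
    {
        struct mq_attr attr = { 0 };
        attr.mq_maxmsg = 8;                 /* illustrative limits */
        attr.mq_msgsize = 128;

        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) {
            perror("mq_open");
            return 1;
        }

        const char *msg = "hello via POSIX mq";
        mq_send(mq, msg, strlen(msg) + 1, 0);

        char buf[128];                      /* must be >= mq_msgsize */
        ssize_t n = mq_receive(mq, buf, sizeof buf, NULL);
        if (n >= 0)
            printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/demo_mq");              /* remove the queue */
        return 0;
    }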
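
Pipes: a minimal sketch in which the parent writes a line into a pipe and the forked child reads it.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {                      /* child: read end */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child read: %s\n", buf);
            }
            close(fd[0]);
            return 0;
        }

        /* parent: write end */
        const char *msg = "hello through the pipe";
        close(fd[0]);
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }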
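
fork and exec: a minimal sketch in which the child replaces its image with ls -l via execlp while the parent waits for it to finish.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == -1) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                      /* child */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");                /* reached only if exec fails */
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);            /* parent waits for the child */
        printf("child %d finished with status %d\n", (int)pid, status);
        return 0;
    }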
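
Execution time: a minimal sketch that does some CPU-bound work and reports the time consumed using both clock() and times(); the size of the loop is arbitrary.

    #include <stdio.h>
    #include <sys/times.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        struct tms start, end;
        clock_t c0 = clock();
        times(&start);

        volatile double x = 0.0;
        for (long i = 0; i < 50000000L; i++)   /* some CPU-bound work */
            x += i * 0.5;

        clock_t c1 = clock();
        times(&end);

        long ticks = sysconf(_SC_CLK_TCK);     /* clock ticks per second */
        printf("clock(): %.2f s of CPU time\n",
               (double)(c1 - c0) / CLOCKS_PER_SEC);
        printf("times(): user %.2f s, system %.2f s\n",
               (double)(end.tms_utime - start.tms_utime) / ticks,
               (double)(end.tms_stime - start.tms_stime) / ticks);
        return 0;
    }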
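
Calendar time: a minimal sketch that reads the calendar time with time() and gettimeofday() and converts it to a local time string with localtime() and strftime().

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);             /* seconds since the Epoch */

        struct timeval tv;
        gettimeofday(&tv, NULL);             /* seconds and microseconds */

        char buf[64];
        struct tm *lt = localtime(&now);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", lt);

        printf("time():         %ld\n", (long)now);
        printf("gettimeofday(): %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
        printf("local time:     %s\n", buf);
        return 0;
    }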