
Title: General Programming Skills

Description: A path to become a better programmer.

Sections:

  1. Programming Languages
  2. Algorithms
  3. Data Structures
  4. Design Patterns
  5. Software Engineering
  6. Computer Science

Programming Languages

There are many programming languages you can learn, each suited to solving problems in different ways, and it is worth learning more than one. Here are some popular languages and what they are commonly used for:

  • C++: Used for systems programming, game development, and competitive programming.
  • C#: Used for game development (notably with Unity), desktop applications, and web development on .NET.
  • Java: Used for Android development, enterprise backends, and web development.
  • JavaScript: Used for web development, both in the browser and on the server via Node.js.
  • Python: Used for data science, machine learning, scripting, and web development.
  • Rust: Used for systems programming, command-line tools, and WebAssembly.
  • Swift: Used for iOS and macOS development.
  • TypeScript: Used for web development; it is a statically typed superset of JavaScript.
  • Go: Used for backend services, networking tools, and cloud infrastructure.
  • Kotlin: Used for Android development and JVM backends.
  • PHP: Used for server-side web development.
  • Ruby: Used for web development (notably with Ruby on Rails) and scripting.

Algorithms

An algorithm is a finite sequence of steps for solving a problem. Here are some fundamental algorithms worth learning:

  • Binary Search: Binary search is a search algorithm that finds the position of a target value within a sorted array. It is a divide and conquer algorithm. It works by comparing the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array, and is therefore much faster than linear search, which would need O(n) comparisons. However, binary search only works on sorted arrays; if the array is not sorted, the results are undefined.
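
    The halving procedure described above can be sketched in a few lines. Python is used here and throughout only because the roadmap names no language of its own:

    ```python
    def binary_search(arr, target):
        """Return the index of target in the sorted list arr, or -1 if absent."""
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                lo = mid + 1  # the target cannot lie in the lower half
            else:
                hi = mid - 1  # the target cannot lie in the upper half
        return -1  # the remaining half became empty: target is not present
    ```

    Note that the input must already be sorted; calling this on an unsorted list gives undefined results, just as the description above warns.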

  • Bubble Sort: Bubble sort is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items, and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. The algorithm, which is a comparison sort, is named for the way smaller or larger elements "bubble" to the top of the list. Although the algorithm is simple, it is too slow and impractical for most problems, even when compared to insertion sort. It can be practical if the input is usually in sorted order but may occasionally have a few out-of-order elements nearly in position.
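
    A minimal Python sketch of the repeated adjacent-swap passes (Python is an assumption, since the roadmap is language-agnostic):

    ```python
    def bubble_sort(items):
        """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
        n = len(items)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
            n -= 1  # the largest unsorted element has bubbled into its final slot
        return items
    ```

    The early exit when a pass makes no swaps is what makes bubble sort tolerable on inputs that are already nearly sorted.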

  • Insertion Sort: Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages: a simple implementation; efficiency on (quite) small data sets; better performance in practice than most other simple quadratic (i.e., O(n²)) algorithms such as selection sort or bubble sort; adaptivity, i.e., efficiency on data sets that are already substantially sorted (the time complexity is O(kn) when each element in the input is no more than k places away from its sorted position); stability, i.e., it does not change the relative order of elements with equal keys; in-place operation, i.e., it requires only a constant O(1) amount of additional memory; and online operation, i.e., it can sort a list as it receives it.
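
    The one-item-at-a-time construction looks like this in Python (again an assumed language, not one the roadmap prescribes):

    ```python
    def insertion_sort(items):
        """Grow a sorted prefix one element at a time (stable, in place)."""
        for i in range(1, len(items)):
            key = items[i]
            j = i - 1
            # Shift elements of the sorted prefix that are larger than key
            # one slot to the right, opening a gap for key.
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = key
        return items
    ```

    Because the inner loop stops at the first element not greater than `key`, equal keys keep their relative order (stability) and nearly sorted inputs finish quickly (adaptivity).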

  • Linear Search: Linear search (also called sequential search) is a very simple method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched. Linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list. If each element is equally likely to be searched, linear search makes about n/2 comparisons on average, and the best case (when the first element matches) is O(1). Linear search is rarely practical because other search algorithms and schemes, such as the binary search algorithm and hash tables, allow significantly faster searching for all but short lists.
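
    A sketch of the sequential check in Python (the language is this document's assumption, not a requirement):

    ```python
    def linear_search(items, target):
        """Check each element in turn; O(n) worst case, O(1) best case."""
        for i, value in enumerate(items):
            if value == target:
                return i  # index of the first match
        return -1  # the whole list was searched without a match
    ```

    Unlike binary search, this works on unsorted data, which is why it remains useful for short or unordered lists.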

  • Divide and Conquer: Divide and conquer is an algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. The technique is applicable to many types of problems and is one of the most powerful techniques for designing efficient algorithms; merge sort and quicksort, for example, are divide-and-conquer algorithms. The idea is simple: divide the given problem into sub-problems of the same type, solve the sub-problems recursively, and combine their solutions to get the solution of the original problem.

  • Merge Sort: Merge sort is an efficient, general-purpose, comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the implementation preserves the input order of equal elements in the sorted output. Merge sort is a divide and conquer algorithm that was invented by John von Neumann in 1945. A detailed description and analysis of bottom-up mergesort appeared in a report by Goldstine and von Neumann as early as 1948. A good implementation of merge sort requires that the input array be copied only once, and that the auxiliary array be of length at most n, where n is the length of the input array. Merge sort is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation is defined. In the worst case, this algorithm performs O(n log n) comparisons. Merge sort is often preferred for sorting linked lists: the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible.
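
    The split-sort-merge structure can be sketched as follows in Python (an assumed language; this copies sublists for clarity rather than using a single auxiliary array as a tuned implementation would):

    ```python
    def merge_sort(items):
        """Recursively split, sort each half, and merge; stable, O(n log n)."""
        if len(items) <= 1:
            return items  # a list of 0 or 1 elements is already sorted
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge the two sorted halves into one sorted list.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:  # <= keeps equal elements in input order (stability)
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged
    ```

    The merge step only ever compares the current front of each half, which is why the same idea adapts so well to linked lists.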

  • Quicksort: Quicksort is a divide and conquer algorithm. Quicksort first divides a large array into two smaller sub-arrays: the low elements and the high elements. Quicksort can then recursively sort the sub-arrays. The steps are:

    1. Pick an element, called a pivot, from the array.
    2. Partitioning: reorder the array so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
    3. Recursively apply the above steps to the sub-array of elements with smaller values and separately to the sub-array of elements with greater values.
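
The three steps above can be sketched directly in Python (an assumed language; this version builds new lists at each level, whereas classic quicksort partitions in place):

```python
def quicksort(items):
    """Partition around a pivot, then recurse on each side."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]                # step 1: pick a pivot
    smaller = [x for x in items if x < pivot]     # step 2: partition into
    equal = [x for x in items if x == pivot]      #         low / pivot / high
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)  # step 3: recurse on the sub-arrays
```

After partitioning, every element equal to the pivot is already in its final position, so the recursion only has to handle the strictly smaller and strictly larger sub-arrays.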
  • Selection Sort: Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n²) time complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and it has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited. The algorithm divides the input list into two parts: the sublist of items already sorted, which is built up from left to right at the front (left) of the list, and the sublist of items remaining to be sorted that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right.
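
    The sorted/unsorted sublist boundary described above is just the loop index in a Python sketch (Python being this document's assumed example language):

    ```python
    def selection_sort(items):
        """Repeatedly move the smallest remaining element to the sorted prefix."""
        for i in range(len(items)):
            # items[:i] is the sorted sublist; find the minimum of the rest.
            smallest = i
            for j in range(i + 1, len(items)):
                if items[j] < items[smallest]:
                    smallest = j
            # Swap it into position i, extending the sorted prefix by one.
            items[i], items[smallest] = items[smallest], items[i]
        return items
    ```

    Note the memory advantage mentioned above: the only extra state is a couple of indices, regardless of input size.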

Data Structures

Data structures are a way to store and organize data so that it can be accessed and modified efficiently. There are many different types of data structures, each with their own advantages and disadvantages. Some are highly specialized, while others (like arrays) are more generally used. The most important thing to remember is that choosing the right data structure for a particular problem can greatly improve the efficiency of your code.

  • Hash Table: A hash table is a data structure that implements an associative array abstract data type, a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. Hash tables, also known as hashes or hash maps, are used in many kinds of computer software and in many programming languages, libraries, and operating systems, particularly to implement associative arrays, symbol tables, database indexes, caches, and sets. They are designed to be efficient in both time and space for most applications.
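
    In Python (the assumed example language here), the built-in dict is a hash table, so the key-to-value mapping above can be demonstrated without writing one from scratch:

    ```python
    # dict hashes each key to locate its bucket, so insert, lookup, and
    # membership tests all take O(1) time on average.
    inventory = {}
    inventory["apples"] = 12       # insert: hash the key, store the value in its slot
    inventory["oranges"] = 7
    apples = inventory["apples"]   # lookup: hash the key again to find the value
    has_pears = "pears" in inventory  # membership also goes through the hash
    ```

    Rolling your own table (hash function, bucket array, collision handling) is a worthwhile exercise, but in practice you almost always reach for the language's built-in map type.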

  • Linked List: A linked list is a linear collection of data elements, whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a data structure consisting of a group of nodes which together represent a sequence. Under the simplest form, each node is composed of data and a reference (in other words, a link) to the next node in the sequence. This structure allows for efficient insertion or removal of elements from any position in the sequence during iteration. More complex variants add additional links, allowing more efficient insertion or removal of nodes at arbitrary positions. A drawback of linked lists is that access time is linear (and difficult to pipeline). Faster access, such as random access, is not feasible. Arrays have better cache locality as compared to linked lists.
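
    A minimal singly linked list in Python (the assumed example language), showing the node-plus-reference structure and the O(1) front insertion the description mentions:

    ```python
    class Node:
        """One element of a singly linked list: a value plus a link."""
        def __init__(self, data, next=None):
            self.data = data
            self.next = next  # reference to the next node, or None at the tail

    def push_front(head, data):
        """Insert at the front in O(1): no shifting, just a new node linking to head."""
        return Node(data, head)

    def to_list(head):
        """Walk the chain of next references and collect the values in order."""
        out = []
        while head is not None:
            out.append(head.data)
            head = head.next
        return out

    # Build 1 -> 2 -> 3 by pushing in reverse order.
    head = push_front(push_front(push_front(None, 3), 2), 1)
    ```

    Contrast this with an array: insertion at the front is constant time, but finding the k-th element requires walking k links.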

  • Array: An array is a data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key. An array is stored such that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called one-dimensional array. In a linear array, the array index is a single integer. Linear arrays are often used to implement simple lookup tables and other tables that require random access. Arrays are also used to implement other data structures, such as lists, stacks, queues, and associative arrays.

  • Stack: A stack is an abstract data type that serves as a collection of elements, with two principal operations: push, which adds an element to the collection, and pop, which removes the most recently added element that was not yet removed. The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top without modifying the stack. The name "stack" comes from the analogy to a set of physical items stacked on top of each other: it is easy to take an item off the top, while getting to an item deeper in the stack may require taking off multiple other items first. More precisely, a stack is an ordered list of items that follows the LIFO principle; push and pop occur only at one end, referred to as the top.
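
    In Python (the assumed example language), a plain list already behaves as a stack, so push, peek, and pop map directly onto list operations:

    ```python
    stack = []           # a Python list used as a stack; the end of the list is the top
    stack.append("a")    # push
    stack.append("b")
    stack.append("c")
    top = stack[-1]      # peek: inspect the top without modifying the stack
    popped = stack.pop() # pop: removes the most recently pushed element (LIFO)
    ```

    Both `append` and `pop` at the end of a list run in amortized O(1) time, which is exactly the access pattern a stack needs.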

  • Queue: A queue is an abstract data type or collection in which the entities are kept in order and the principal (or only) operations are the addition of entities to the rear terminal position, known as enqueue, and the removal of entities from the front terminal position, known as dequeue. This makes the queue a first-in-first-out (FIFO) data structure: the first element added to the queue will be the first one removed, which is equivalent to requiring that once a new element is added, all elements added before it must be removed before the new element can be removed. Often a peek or front operation is also provided, returning the value of the front element without dequeuing it. More precisely, a queue is an ordered list of items that follows the FIFO principle; enqueue and dequeue occur only at the ends of the queue, referred to as the rear and the front.
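
    In Python (the assumed example language), `collections.deque` gives O(1) operations at both ends, so it is the natural queue; popping from the front of a plain list would be O(n):

    ```python
    from collections import deque

    queue = deque()
    queue.append("first")     # enqueue at the rear
    queue.append("second")
    queue.append("third")
    front = queue[0]          # peek at the front without removing it
    served = queue.popleft()  # dequeue from the front: FIFO order
    ```

    The element enqueued first is the first one served, which is the FIFO property the description above defines.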

  • Tree: A tree is a widely used abstract data type (ADT)—or data structure implementing this ADT—that simulates a hierarchical tree structure, with a root value and subtrees of children with a parent node, represented as a set of linked nodes. A tree data structure can be defined recursively (locally) as a collection of nodes (starting at a root node), where each node is a data structure consisting of a value, together with a list of references to nodes (the "children"), with the constraints that no reference is duplicated, and none points to the root.

  • Graph: A graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics; specifically, the field of graph theory. A graph data structure consists of a finite (and possibly mutable) set of vertices or nodes or points, together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges, arcs, or lines for an undirected graph and as arrows, directed edges, directed arcs, or directed lines for a directed graph. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references.
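
    One common concrete representation of the vertex-and-edge structure above is an adjacency list; here is a small undirected example in Python (the assumed example language):

    ```python
    # Adjacency-list representation of an undirected graph: each vertex maps
    # to the set of vertices it shares an edge with. Because the graph is
    # undirected, every edge appears in both endpoints' sets.
    graph = {
        "A": {"B", "C"},
        "B": {"A", "C"},
        "C": {"A", "B", "D"},
        "D": {"C"},
    }

    def neighbors(g, v):
        """Return the set of vertices adjacent to v (empty if v is unknown)."""
        return g.get(v, set())
    ```

    An adjacency matrix is the usual alternative; lists are compact for sparse graphs, while matrices make edge lookups O(1).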

  • Heap: A heap is a specialized tree-based data structure that satisfies the heap property: if P is a parent node of C, then the key (the value) of P is either greater than or equal to (in a max heap) or less than or equal to (in a min heap) the key of C. The node at the "top" of the heap (with no parents) is called the root node.

  • Trie: A trie, also called digital tree and sometimes radix tree or prefix tree (as they can be searched by prefixes), is an ordered tree data structure that is used to store a dynamic set or associative array where the keys are usually strings. Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are not necessarily associated with every node. Rather, values tend only to be associated with leaves, and with some inner nodes that correspond to keys of interest. For the space-optimized presentation of prefix tree, see compact prefix tree.
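
    A trie can be sketched with nested dicts in Python (the assumed example language); the `"$"` end-of-word marker is an arbitrary choice for this sketch:

    ```python
    def trie_insert(root, word):
        """Insert a word: each character descends one level of nested dicts."""
        node = root
        for ch in word:
            node = node.setdefault(ch, {})  # create the child node if missing
        node["$"] = True  # marker: a complete key ends at this node

    def trie_contains(root, word):
        """Return True only if word was inserted as a complete key."""
        node = root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node  # a mere prefix of a stored key lacks the marker

    root = {}
    for w in ["car", "cart", "cat"]:
        trie_insert(root, w)
    ```

    Note how "car", "cart", and "cat" share the "ca" path: the position in the tree, not the node itself, encodes the key.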

  • Priority Queue: A priority queue is an abstract data type which is like a regular queue or stack data structure, but where additionally each element has a "priority" associated with it. In a priority queue, an element with high priority is served before an element with low priority. If two elements have the same priority, they are served according to their order in the queue. A typical priority queue supports the following operations: insert an item with a given priority, extract the highest-priority item, change the priority of an item, and delete an item.
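
    Priority queues are usually backed by a heap; in Python (the assumed example language) the standard-library heapq module provides a binary min-heap over a plain list:

    ```python
    import heapq

    tasks = []
    # Store (priority, item) pairs; heapq is a min-heap, so a lower
    # number here means the item is served earlier.
    heapq.heappush(tasks, (2, "write docs"))
    heapq.heappush(tasks, (1, "fix bug"))
    heapq.heappush(tasks, (3, "refactor"))
    first = heapq.heappop(tasks)  # always extracts the smallest priority
    ```

    Both push and pop are O(log n), which is what makes heaps the standard backing structure for priority queues.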

  • Disjoint Set: A disjoint-set (or union-find) data structure keeps track of a set of elements partitioned into a number of disjoint (non-overlapping) subsets. A union-find algorithm performs two useful operations on such a structure: Find, which determines which subset a particular element is in (and can therefore be used to check whether two elements are in the same subset), and Union, which joins two subsets into a single subset.
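
    The Find and Union operations can be sketched with a parent array in Python (the assumed example language); this version uses path halving, a simple form of path compression:

    ```python
    class DisjointSet:
        def __init__(self, size):
            self.parent = list(range(size))  # each element starts in its own subset

        def find(self, x):
            """Follow parent links to the subset's representative."""
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            """Join the subsets containing a and b."""
            self.parent[self.find(a)] = self.find(b)

    ds = DisjointSet(5)
    ds.union(0, 1)
    ds.union(1, 2)  # now 0, 1, 2 share one subset; 3 and 4 are singletons
    ```

    With path compression (and, in fuller implementations, union by rank), each operation runs in nearly constant amortized time, which is why union-find underpins algorithms like Kruskal's minimum spanning tree.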

Design Patterns

Design patterns are typical solutions to common problems in software design. Each pattern is like a blueprint that you can customize to solve a particular design problem in your code. Because patterns provide an industry-standard approach to a recurring problem, using them sensibly saves time, and it promotes reusability, which leads to more robust and highly maintainable code. It is important to note that a design pattern is not a finished design that can be transformed directly into code; it is a description or template for how to solve a problem that can be used in many different situations.

  • Creational Patterns: These design patterns provide a way to create objects while hiding the creation logic, rather than instantiating objects directly with the new operator. This gives the program more flexibility in deciding which objects need to be created for a given use case.

  • Structural Patterns: These design patterns concern class and object composition. The concept of inheritance is used to compose interfaces and to define ways to compose objects to obtain new functionality.

  • Behavioral Patterns: These design patterns are specifically concerned with communication between objects.