AS I BEGAN TO LOVE MYSELF: CHARLIE CHAPLIN ON HIS 70TH BIRTHDAY

As I began to love myself I found that anguish and emotional suffering are only warning signs that I was living against my own truth. Today, I know, this is AUTHENTICITY.

As I began to love myself I understood how much it can offend somebody when I try to force my desires on this person, even though I knew the time was not right and the person was not ready for it, and even though this person was me. Today I call it RESPECT.

As I began to love myself I stopped craving for a different life, and I could see that everything that surrounded me was inviting me to grow. Today I call it MATURITY.

As I began to love myself I understood that in any circumstance, I am in the right place at the right time, and everything happens at exactly the right moment, so I could be calm. Today I call it SELF-CONFIDENCE.

As I began to love myself I quit stealing my own time, and I stopped designing huge projects for the future. Today, I only do what brings me joy and happiness, things I love to do and that make my heart cheer, and I do them in my own way and in my own rhythm. Today I call it SIMPLICITY.

As I began to love myself I freed myself of anything that is no good for my health – food, people, things, situations, and everything that drew me down and away from myself. At first I called this attitude a healthy egoism. Today I know it is LOVE OF ONESELF.

As I began to love myself I quit trying to always be right, and ever since, I have been wrong less often. Today I discovered that this is MODESTY.

As I began to love myself I refused to go on living in the past and worrying about the future. Now, I only live for the moment, where EVERYTHING is happening. Today I live each day, day by day, and I call it FULFILLMENT.

As I began to love myself I recognized that my mind can disturb me and can make me sick. But as I connected it to my heart, my mind became a valuable ally. Today I call this connection WISDOM OF THE HEART.

We no longer need to fear arguments, confrontations or any kind of problems with ourselves or others. Even stars collide, and out of their crashing new worlds are born.

Today I know THAT IS “LIFE”!

Bits – 3 times a week comes true

Back in April 2014, I was busy juggling various activities simultaneously: a robotics project, Django development, a machine learning course, readings, and so on, all in my spare time on top of the 6 hours I spent at school every day.

What I ended up doing was sticking with one project at a time for 3 hours straight and literally leaving everything else untouched. The result was obvious: no books were read, no courses were followed, no jogging was done, and heck, I couldn’t even remember to Skype with my parents at least once every two weeks! I wasn’t productive enough, I got worn out by sticking to one task, and I wasn’t motivated to work on the other projects since I didn’t know which one I should be working on.

That’s when I started Bits, yet another project, but one that might bring an end to all this messy scheduling.

Here’s how:

The idea was simple: you tell Bits you want to read this book three times a week, Skype with your parents twice a month, and go jogging twice a week, and Bits gets everything scheduled for you.


With simplicity in mind, you don’t need to fill out complicated forms or hesitate over whether it’s better to repeat a task every Friday or every Monday. You tell Bits what you want, and Bits arranges every bit of your life for you.

Using it on a daily basis, I’ve developed an achievement mode, statistics reports, and a reward mechanism for your first ‘done’s. The better you manage your tasks, the more powerful Bits gets. Bits helps me a lot in managing my time in a healthier way; I hope it does the same for you!

 


Get it on Google Play

 

If you find any bugs or have any suggestions, just say ‘hi’ at android.bitsapp@gmail.com.

Firstly, find t…

I’ve spent a good deal of the last 6 months oscillating between (often multiple times in one day) the twin but completely contradictory beliefs that:

  1. There are trillions of people who know hundreds of times more stuff about everything than me, and anything I can do could and should be done infinitely quicker and more effectively by these savvy supermen.
  2. I AM PROGRAMMER, KNEEL BEFORE ME AND BEG ME TO WORK WITH YOU.
When written out in front of you, it’s pretty clear that both these statements are somewhere between false and insane. It’s equally clear that the true and healthy mindset lies somewhere between them. Unfortunately, it can be hard to remember this.

So true! Hoping for the best!

Best Time to Buy and Sell Stock III

[Best Time to Buy and Sell Stock III]

Say you have an array for which the ith element is the price of a given stock on day i.

Design an algorithm to find the maximum profit. You may complete at most two transactions.

Note:
You may not engage in multiple transactions at the same time (ie, you must sell the stock before you buy again).

[Analysis]

Based on Best Time to Buy and Sell Stock II, I first tried the naive approach of dividing the prices into two halves and finding the maximum sum of the best transaction in each half, over all split points. This works on small test cases but fails on larger ones: the complexity is O(n^2).

We can actually do this in O(n). The idea is to scan through the prices forward and find the maxProfit for day i if we sell on or before day i, storing these values in a DP array. Then we scan backward and do the same thing, that is, find the maxProfit if we buy on or after day i.

We then just need to find the maximum sum of the two arrays at each index. It turns out we don’t even need to keep the backward array; a running maxProfit is enough.

Both scans cost O(n), so the whole solution is O(n).

This solution reminds me of Candy. It seems that these kinds of problems, which involve satisfying the same constraint from both sides, can often be solved by scanning forward and then backward. Don’t try to solve them in one pass; instead, go forward and backward and solve half of the problem each time. The big O stays the same as a single scan, so the backward scan is essentially “free”.

class Solution {

    // maxEndWith[i] = best profit of a single transaction that sells on or before day i.
    void maxProfitHelper(const vector<int> &prices, vector<int> &maxEndWith) {
        int lowest = prices[0];   // lowest price seen so far
        int maxProfit = 0;

        for (int i = 0; i<prices.size(); ++i){

            if (prices[i] < lowest) {
                lowest = prices[i];
            }

            if (prices[i]-lowest > maxProfit){
                maxProfit = prices[i] - lowest;
            }

            maxEndWith.push_back(maxProfit);
        }
    }

public:
    int maxProfit(vector<int> &prices) {
        if (prices.empty()) return 0;

        vector<int> maxEndWith;
        maxProfitHelper(prices, maxEndWith);

        // Profit when we use only one transaction.
        int res = maxEndWith.back();

        // Scan backward: maxProfit = best profit of a transaction that buys on or after day i.
        int highest = prices.back();
        int maxProfit = 0;

        for (int i = (int)prices.size() - 2; i >= 0; --i) {
            if (highest - prices[i] > maxProfit)
                maxProfit = highest - prices[i];

            if (prices[i] > highest)
                highest = prices[i];

            if (maxEndWith[i]+maxProfit > res)
                res = maxEndWith[i]+maxProfit;
        }

        return res;
    }
};
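
To sanity-check the two-scan idea, here is a minimal driver sketch (assuming the Solution class above is in scope, plus <vector> and <iostream>; the sample prices are just my own illustration):

#include <iostream>
#include <vector>
using namespace std;

// ... Solution class from above ...

int main() {
    Solution s;
    vector<int> prices = {3, 2, 6, 5, 0, 3};
    // forward array maxEndWith: 0 0 4 4 4 4   (best sale on or before day i)
    // best "buy on or after day i" profits: 4 4 3 3 3 0
    // best combination: 4 + 3 = 7 (buy 2 / sell 6, then buy 0 / sell 3)
    cout << s.maxProfit(prices) << endl;  // expected output: 7
    return 0;
}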

Jump Game II

[Jump Game II]

Given an array of non-negative integers, you are initially positioned at the first index of the array.

Each element in the array represents your maximum jump length at that position.

Your goal is to reach the last index in the minimum number of jumps.

For example:
Given array A = [2,3,1,1,4]

The minimum number of jumps to reach the last index is 2. (Jump 1 step from index 0 to 1, then 3 steps to the last index.)

[Analysis]

A naive approach is to use plain DP without checking if (i+A[i] > farthest) before running the inner loop that looks for a shorter path. In fact, if this condition is not satisfied, we can never find a shorter path, since we could only reach positions already covered in fewer steps. Only when a position expands our reachable area is it possible to update miniSteps.

Also, we only update starting from farthest, for the same reason. Since farthest only moves forward, each position is updated at most once, which brings the total complexity to O(n).

    int jump(int A[], int n) {
        vector<int> miniSteps(n, INT_MAX);
        miniSteps[0] = 0;
        int farthest = 0;

        for (int i=0; i<n; ++i){
            if (i <= farthest) {
                // otherwise, at position i we cannot expand our attainable area;
                // we would only waste one more step to reach positions we can already reach.
                if (i+A[i] > farthest) {
                    // we can possibly extend our attainable range!
                    for (int j=farthest; j<n && j<=i+A[i]; ++j) {
                        if (miniSteps[i] + 1 < miniSteps[j])
                            miniSteps[j] = miniSteps[i]+1;
                    }
                    farthest = i+A[i];
                }
            } else {
                // i > farthest: position i itself is unreachable, so the destination is unattainable
                break;
            }
        }
        if (miniSteps[n-1] != INT_MAX)
            return miniSteps[n-1];
        return -1;
    }
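
For a quick check, here is a minimal driver sketch (assuming the jump function above together with <vector>, <climits> and <iostream>; the array is the sample from the problem statement):

#include <climits>
#include <iostream>
#include <vector>
using namespace std;

// ... jump() from above ...

int main() {
    int A[] = {2, 3, 1, 1, 4};
    // jump 1 step from index 0 to 1, then 3 steps to the last index
    cout << jump(A, 5) << endl;  // expected output: 2
    return 0;
}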

Jump Game

[Jump Game]

Given an array of non-negative integers, you are initially positioned at the first index of the array.

Each element in the array represents your maximum jump length at that position.

Determine if you are able to reach the last index.

For example:
A = [2,3,1,1,4], return true.

A = [3,2,1,0,4], return false.

[Analysis]

At first it seemed like a straightforward DP problem, so I did just that and failed on the last test case, which contains 25k elements… Lesson learned: there must be some observations to be made in order to optimize the solution. Let’s optimize it!

Note my DP array canJump. From a given position i we can make at most A[i] steps, so the A[i] positions following i are all possible destinations. That is, the DP array is set block by block: if canJump[i] = true, then not only is position i reachable, but every position before it must be reachable as well. There is thus no need to set them to true again.

Therefore, at each position we retain the previously farthest position such that canJump[farthest] = true, and we check whether i+A[i] would bring us even farther; if so, we start filling at farthest and then update the farthest position. Since each element is set to true at most once, the complexity is brought down from O(n^2) to O(n).

    bool canJump(int A[], int n) {
        if (n == 0)
            return false;
            
        // canJump[i]: true if position i is reachable from index 0.
        vector<bool> canJump(n, false);
        canJump[0] = true;
        int farthest = 0;

        for (int i=0; i<n; ++i) {
            if (canJump[i]) {
                if (i+A[i] > farthest) {
                    for (int j=farthest; j<n-1 && j<=i+A[i]; ++j) {
                        canJump[j] = true;
                    }
                    farthest = min(n-1, i+A[i]);
                    if (farthest == n-1)
                        return true;
                } 
            }
        }
        
        return canJump[n-1];
    }

Update:
It turns out there is no need for the DP array at all: since we know everything up to farthest is attainable, we can simply check if i <= farthest. This runs in 20ms instead of 70ms. Not so bad 😛


    bool canJump(int A[], int n) {
        if (n == 0)
            return false;
        if (n == 1)
            return true;

        int farthest = 0;

        for (int i=0; i<n; ++i) {
            if (i <= farthest) {
                if (i+A[i] > farthest) {
                    farthest = min(n-1, i+A[i]);
                    if (farthest == n-1)
                        return true;
                }
            }
        }

        return false;
    }
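
For completeness, a minimal driver sketch exercising this final version on the two sample arrays from the problem statement (assuming the canJump function above and <iostream>):

#include <iostream>
using namespace std;

// ... canJump() from above (final version) ...

int main() {
    int A[] = {2, 3, 1, 1, 4};
    int B[] = {3, 2, 1, 0, 4};
    cout << boolalpha
         << canJump(A, 5) << endl   // expected output: true
         << canJump(B, 5) << endl;  // expected output: false
    return 0;
}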

Path Sum II

[Path Sum II]

Given a binary tree and a sum, find all root-to-leaf paths where each path’s sum equals the given sum.

For example:
Given the below binary tree and sum = 22,

              5
             / \
            4   8
           /   / \
          11  13  4
         /  \    / \
        7    2  5   1

return

[
   [5,4,11,2],
   [5,8,4,5]

]

[Analysis]

Know your DFS!

Don’t forget to pop!

If you do pop, make sure every recursive call pushes exactly one element, so the pushes and pops stay balanced!

/**
 * Definition for binary tree
 * struct TreeNode {
 *     int val;
 *     TreeNode *left;
 *     TreeNode *right;
 *     TreeNode(int x) : val(x), left(NULL), right(NULL) {}
 * };
 */
class Solution {
private:
    vector<vector<int>> res;
    vector<int> r;

    void pathSumHelper(TreeNode *node, int target) {
        if (!node->left && !node->right && target == node->val) {
            // node is a leaf and it's a solution
            r.push_back(node->val);
            res.push_back(r);
            return;
        }

        // internal node, also makes sure that we always push a node
        r.push_back(node->val);
        int current = target - node->val;

        if (node->right) {
            pathSumHelper(node->right, current);
            r.pop_back();
        }
        if (node->left) {
            pathSumHelper(node->left, current);
            r.pop_back();
        }
    }

public:
    vector<vector<int>> pathSum(TreeNode *root, int sum) {
        res.clear();
        r.clear();

        if (!root)
            return res;

        pathSumHelper(root, sum);

        return res;
    }
};
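
Here is a minimal driver sketch for the example tree (assuming the Solution class above, a TreeNode struct matching the commented definition, and <vector> plus <iostream>; the tree is hand-built to match the figure):

#include <iostream>
#include <vector>
using namespace std;

// ... TreeNode and Solution from above ...

int main() {
    // build the example tree
    TreeNode *root = new TreeNode(5);
    root->left = new TreeNode(4);
    root->right = new TreeNode(8);
    root->left->left = new TreeNode(11);
    root->left->left->left = new TreeNode(7);
    root->left->left->right = new TreeNode(2);
    root->right->left = new TreeNode(13);
    root->right->right = new TreeNode(4);
    root->right->right->left = new TreeNode(5);
    root->right->right->right = new TreeNode(1);

    Solution s;
    for (const vector<int> &path : s.pathSum(root, 22)) {
        for (int v : path) cout << v << " ";
        cout << endl;
    }
    // expected paths (in either order): 5 4 11 2 and 5 8 4 5
    return 0;
}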

Candy

[Candy]

There are N children standing in a line. Each child is assigned a rating value.

You are giving candies to these children subjected to the following requirements:

  • Each child must have at least one candy.
  • Children with a higher rating get more candies than their neighbors.

What is the minimum candies you must give?

[Analysis]

To solve this problem in O(n) there are several observations to be made.

  1. when we scan through the ratings, there is a constraint between the candies of child i and child i-1 only when we are at an ascending position (ratings[i-1] < ratings[i]); there is no constraint otherwise.
  2. starting with 1 candy for each child, we should only ever increase the number of candies to be distributed, never reduce it; otherwise we would break constraints established earlier.

The following solution is inspired by this post, with more detailed comments.


class Solution {
public:
    int candy(vector<int> &ratings) {
        if (ratings.empty()) return 0;

        vector<int> candies(ratings.size(), 1);
        
        // scanning through the ratings from left to right while 
        // maintaining the rules for all rising positions
        for (int i=1; i<ratings.size(); ++i) {
            if (ratings[i] > ratings[i-1])
                candies[i] = candies[i-1]+1;
        }
        
        int sum = 0;
        // scanning through the ratings from right to left while 
        // maintaining the rules for all rising positions 
        // (descending positions in the first loop)
        for (int i = (int)ratings.size() - 2; i >= 0; --i) {
            // we only want to increase numbers, never reduce them;
            // !(candies[i] >= candies[i+1]+1) guarantees that we only ever increase candies[i].
            // increasing a number while scanning right to left never breaks the right-neighbor rule,
            // and the left-neighbor rule from the first pass is preserved as long as we only increase.
            if (ratings[i] > ratings[i+1] && !(candies[i] >= candies[i+1]+1))
                candies[i] = candies[i+1]+1;
            sum += candies[i+1];
        }
        sum += candies[0];
        
        return sum;
    }
};
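
A minimal sanity-check sketch (assuming the Solution class above plus <vector> and <iostream>; the ratings are my own example):

#include <iostream>
#include <vector>
using namespace std;

// ... Solution class from above ...

int main() {
    Solution s;
    vector<int> ratings = {1, 0, 2};
    // optimal distribution is 2, 1, 2 candies, so the minimum total is 5
    cout << s.candy(ratings) << endl;  // expected output: 5
    return 0;
}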

Search in Rotated Sorted Array

[Search in Rotated Sorted Array]

Suppose a sorted array is rotated at some pivot unknown to you beforehand.

(i.e., 0 1 2 4 5 6 7 might become 4 5 6 7 0 1 2).

You are given a target value to search. If found in the array return its index, otherwise return -1.

You may assume no duplicate exists in the array.

[Analysis]

At first I thought of using binary search to find the reset index (the position of the 0 in the example above), and then running a second binary search on the appropriate section. Here is the code of this implementation. The complexity is still O(log N), as this is basically two binary searches. The bSearchIndex function returns the reset index (0 if the array is not rotated); it should never return -1.


class Solution {
    int bSearchIndex(int A[], int n){
        int start = 0;
        int end = n-1;

        while (start <= end) {
            int mid = start + (end-start)/2;
            if (A[mid] > A[end]) {
                start = mid+1;
            } else if (A[mid] < A[end]) {
                end = mid;
            } else {
                return mid;
            }
        }

        return -1;
    }

    int bSearchTarget(int A[], int n, int resetPoint, int target) {
        int start = (target >= A[resetPoint] && target <= A[n-1]) ? resetPoint : 0;
        int end = start == 0 ? resetPoint-1 : n-1;

        if (resetPoint == 0){
            start = 0;
            end = n-1;
        }

        while (start <= end) {
            int mid = start + (end - start)/2;
            if (A[mid] > target)
                end = mid-1;
            else if (A[mid] < target) {
                start = mid+1;
            } else {
                return mid;
            }
        }
        return -1;
    }

public:

    int search(int A[], int n, int target) {
        int resetIndex = bSearchIndex(A, n);
        return bSearchTarget(A, n, resetIndex, target);
    }
};

However, we don’t necessarily need to know the reset point, as pointed out here: we can always deduce on which side the target resides by comparing A[M] with A[L]. The code is reproduced below.

int rotated_binary_search(int A[], int N, int key) {
  int L = 0;
  int R = N - 1;

  while (L <= R) {
    // Avoid overflow, same as M=(L+R)/2
    int M = L + ((R - L) / 2);
    if (A[M] == key) return M;

    // the bottom half is sorted
    if (A[L] <= A[M]) {
      if (A[L] <= key && key < A[M])
        R = M - 1;
      else
        L = M + 1;
    }
    // the upper half is sorted
    else {
      if (A[M] < key && key <= A[R])
        L = M + 1;
      else
        R = M - 1;
    }
  }
  return -1;
}

The (A[L] <= A[M]) check gives us enough information to know which half of the array is sorted. If the left half is sorted, we compare key with its lower and upper bounds to check whether the target falls inside it; otherwise the target must be in the right half, or it doesn’t exist. The function returns -1 when L > R.

This is also O(log N), since it’s a modified version of binary search. It’s not asymptotically faster, but it is much easier to write. Still, my original version gave me quite a good exercise in writing binary searches. 😛
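
A minimal driver sketch for the second version (assuming the rotated_binary_search function above and <iostream>; the array is the one from the problem statement):

#include <iostream>
using namespace std;

// ... rotated_binary_search() from above ...

int main() {
    int A[] = {4, 5, 6, 7, 0, 1, 2};
    cout << rotated_binary_search(A, 7, 0) << endl;  // expected output: 4 (index of 0)
    cout << rotated_binary_search(A, 7, 3) << endl;  // expected output: -1 (not found)
    return 0;
}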

Simplify Path

[Simplify Path]

Given an absolute path for a file (Unix-style), simplify it.

For example,
path = "/home/", => "/home"
path = "/a/./b/../../c/", => "/c"

Corner Cases:

  • Did you consider the case where path = "/../"?
    In this case, you should return "/".
  • Another corner case is the path might contain multiple slashes '/' together, such as "/home//foo/".
    In this case, you should ignore redundant slashes and return "/home/foo".

[Analysis]

The idea is straightforward once you think of a stack: pop if we have “..”, do nothing if we have “.”, and push otherwise.

The interesting part is how to split a string in C++. There are various ways to do it, and the most interesting and most reusable one is to create a separate function for this task (I’m jealous of Python at this point :P). It’s a tiny bit slower since we go through the path twice, but we are still in O(n), and the gain in reusability wins in my book. Credit to Stack Overflow for the split solution.

vector<string> split(const string &s, char delim) {
    vector<string> elems;
    stringstream ss(s);
    string item;
    // getline stops at the next delim and puts everything before it into item.
    while (getline(ss, item, delim)) {
        elems.push_back(item);
    }
    return elems;
}

string simplifyPath(string path) {
    stack<string> directories;

    vector<string> path_v = split(path, '/');

    string simplePath;

    for (auto s : path_v) {
        if (s.empty()) continue;
        if (s == ".." && !directories.empty())
            directories.pop();
        if (s!= ".." && s != ".")
            directories.push(s);
    }

    while (!directories.empty()){
        simplePath = "/"+directories.top() + simplePath;
        directories.pop();
    }

    if (simplePath.empty())
        simplePath += "/";

    return simplePath;
}
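
And a minimal driver sketch covering the examples and corner cases from the problem statement (assuming the split and simplifyPath functions above, plus <iostream>, <sstream>, <stack>, <string> and <vector>):

#include <iostream>
#include <sstream>
#include <stack>
#include <string>
#include <vector>
using namespace std;

// ... split() and simplifyPath() from above ...

int main() {
    cout << simplifyPath("/home/") << endl;          // expected: /home
    cout << simplifyPath("/a/./b/../../c/") << endl; // expected: /c
    cout << simplifyPath("/../") << endl;            // expected: /
    cout << simplifyPath("/home//foo/") << endl;     // expected: /home/foo
    return 0;
}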