# AS I BEGAN TO LOVE MYSELF: CHARLIE CHAPLIN ON HIS 70TH BIRTHDAY

As I began to love myself I found that anguish and emotional suffering are only warning signs that I was living against my own truth. Today, I know, this is AUTHENTICITY.

As I began to love myself I understood how much it can offend somebody if I try to force my desires on this person, even though I knew the time was not right and the person was not ready for it, and even when this person was me. Today I call it RESPECT.

As I began to love myself I stopped craving for a different life, and I could see that everything that surrounded me was inviting me to grow. Today I call it MATURITY.

As I began to love myself I understood that in any circumstance, I am in the right place at the right time, and everything happens at exactly the right moment, so I could be calm. Today I call it SELF-CONFIDENCE.

As I began to love myself I quit stealing my own time, and I stopped designing huge projects for the future. Today, I only do what brings me joy and happiness, things I love to do and that make my heart cheer, and I do them in my own way and in my own rhythm. Today I call it SIMPLICITY.

As I began to love myself I freed myself of anything that is no good for my health – food, people, things, situations, and everything that drew me down and away from myself. At first I called this attitude a healthy egoism. Today I know it is LOVE OF ONESELF.

As I began to love myself I quit trying to always be right, and ever since, I have been wrong less of the time. Today I discovered that is MODESTY.

As I began to love myself I refused to go on living in the past and worry about the future. Now, I only live for the moment, where EVERYTHING is happening. Today I live each day, day by day, and I call it FULFILLMENT.

As I began to love myself I recognized that my mind can disturb me and it can make me sick. But as I connected it to my heart, my mind became a valuable ally. Today I call this connection WISDOM OF THE HEART.

We no longer need to fear arguments, confrontations or any kind of problems with ourselves or others. Even stars collide, and out of their crashing new worlds are born.

Today I know THAT IS “LIFE”!

# Bits – 3 times a week comes true

Back in April 2014, I was busy juggling various activities simultaneously: a robotics project, Django development, a machine learning course, reading, etc., all in my spare time, on top of the 6 hours I spent at school every day.

What I ended up doing was sticking with one project at a time for 3 hours straight and literally leaving everything else untouched. The result was obvious: no books were read, no courses were followed, no jogging was done, and heck! I couldn’t even remember to Skype with my parents at least once every two weeks! I wasn’t productive enough, I got worn out from sticking to one task, and I had no motivation to work on other things since I didn’t know which one I should be working on.

That’s when I started Bits, yet another project, but one that might bring an end to all this messy scheduling.

Here’s how:

The idea was simple: you tell Bits you want to read this book three times a week, Skype with your parents twice a month, and go jogging two times a week, and Bits gets everything scheduled for you.

With simplicity in mind, you don’t need to fill out complicated forms or hesitate over whether it’s better to repeat a task every Friday or every Monday. You tell Bits what you want, and Bits arranges every bit of your life for you.

Using it on a daily basis myself, I’ve developed an achievement mode, statistics reports, as well as a reward mechanism for your first ‘done’s. The better you manage your tasks, the more powerful Bits gets. Bits helps me a lot in managing my time in a healthier way; I hope it does the same for you!

If you find any bugs or have any suggestions, just say ‘hi’ at android.bitsapp@gmail.com.

# Firstly, find t…

I’ve spent a good deal of the last 6 months oscillating (often multiple times in one day) between the twin but completely contradictory beliefs that:

1. There are trillions of people who know hundreds of times more stuff about everything than me, and anything I can do could and should be done infinitely quicker and more effectively by these savvy supermen.
2. I AM PROGRAMMER, KNEEL BEFORE ME AND BEG ME TO WORK WITH YOU.
When written out in front of you, it’s pretty clear that both these statements are somewhere between false and insane. It’s equally clear that the true and healthy mindset lies somewhere between them. Unfortunately, it can be hard to remember this.

So true! Hoping for the best!

# Best Time to Buy and Sell Stock III

### [Best Time to Buy and Sell Stock III]

Say you have an array for which the ith element is the price of a given stock on day i.

Design an algorithm to find the maximum profit. You may complete at most two transactions.

Note:
You may not engage in multiple transactions at the same time (i.e., you must sell the stock before you buy again).

### [Analysis]

Based on Best Time to Buy and Sell Stock II, I first tried the naive approach of dividing the prices into two halves and finding the maximum of the sum of the two halves. This worked on small test cases but failed on larger ones. Indeed, the complexity was $O(n^2)$.

We can actually do this in O(n). The idea is to scan the prices forward and find the maxProfit for day i if we must sell on or before day i, storing these in a DP array. Then we scan backward and do the same thing: find the maxProfit if we buy on or after day i.

We then just need to find the max sum based on these two arrays. It turns out we don’t even need to keep the backward array; tracking the running maxProfit is enough.

Both scans cost O(n), so the whole algorithm is O(n) in the end.

This solution reminds me of Candy. It seems that problems which involve the same constraint from both sides can often be solved by scanning forward and then scanning backward. Don’t try to solve these problems in one pass; rather, go forward and backward, solving half of the problem each time. Big O gives you the same complexity as a single scan, so the backward scan is “free”.

```
class Solution {
    void maxProfitHelper(const vector<int> &prices, vector<int> &maxEndWith) {
        int lowest = prices[0];
        int maxProfit = 0;

        for (int i = 0; i < prices.size(); ++i) {
            if (prices[i] < lowest)
                lowest = prices[i];

            if (prices[i] - lowest > maxProfit)
                maxProfit = prices[i] - lowest;

            // best profit achievable by selling on or before day i
            maxEndWith.push_back(maxProfit);
        }
    }

public:
    int maxProfit(vector<int> &prices) {
        if (prices.empty()) return 0;

        vector<int> maxEndWith;
        maxProfitHelper(prices, maxEndWith);

        int res = maxEndWith.back();
        int highest = prices.back();
        int maxProfit = 0;

        // backward scan: maxProfit is the best profit buying on or after day i
        for (int i = (int)prices.size() - 2; i >= 0; --i) {
            if (highest - prices[i] > maxProfit)
                maxProfit = highest - prices[i];

            if (prices[i] > highest)
                highest = prices[i];

            if (maxEndWith[i] + maxProfit > res)
                res = maxEndWith[i] + maxProfit;
        }

        return res;
    }
};
```

# Jump Game II

### [Jump Game II]

Given an array of non-negative integers, you are initially positioned at the first index of the array.

Each element in the array represents your maximum jump length at that position.

Your goal is to reach the last index in the minimum number of jumps.

For example:
Given array A = `[2,3,1,1,4]`

The minimum number of jumps to reach the last index is `2`. (Jump `1` step from index 0 to 1, then `3` steps to the last index.)

### [Analysis]

A naive approach is to use DP without checking `if (i + A[i] > farthest)` before running the inner loop that looks for a shorter path. In fact, if this condition is not satisfied, we can never find a shorter path, since we can only land on positions already covered in fewer steps. Only when a position expands our reachable area can it possibly update our miniSteps.

Also, for the same reason, we only start updating from `farthest`: each position is written at most a bounded number of times, which brings the total complexity to O(n).

```
int jump(int A[], int n) {
    vector<int> miniSteps(n, INT_MAX);
    miniSteps[0] = 0;
    int farthest = 0;

    for (int i = 0; i < n; ++i) {
        if (i <= farthest) {
            // otherwise, at position i, we won't be able to expand our
            // attainable area; we'd only waste one more step to get to
            // where we could already reach before.
            if (i + A[i] > farthest) {
                // we can possibly extend our attainable land!
                for (int j = farthest; j < n && j <= i + A[i]; ++j) {
                    if (miniSteps[i] + 1 < miniSteps[j])
                        miniSteps[j] = miniSteps[i] + 1;
                }
                farthest = i + A[i];
            }
        } else {
            // i > farthest already: the destination is not attainable
            break;
        }
    }

    if (miniSteps[n-1] != INT_MAX)
        return miniSteps[n-1];
    return -1;
}
```

# Jump Game

### [Jump Game]

Given an array of non-negative integers, you are initially positioned at the first index of the array.

Each element in the array represents your maximum jump length at that position.

Determine if you are able to reach the last index.

For example:
A = `[2,3,1,1,4]`, return `true`.

A = `[3,2,1,0,4]`, return `false`.

### [Analysis]

At first this seems like a straightforward DP problem, and I did just that, only to fail at the last test case, which contains 25k elements… Lesson learned: there must be some observation to be made in order to optimize the solution. Let’s optimize it!

Note my DP array canJump. At any given position i we can make at most A[i] steps, so the A[i] positions following i are all possible destinations. That is, notice how our DP array is set block by block: if we have canJump[i] = true, then not only is position i a possible destination, but every element before it must also be a possible destination. There is thus no need to set them to true again.

Therefore, at each position we retain the previously farthest position such that canJump[farthest] = true, and we check whether i + A[i] would bring us even farther; if so, we start filling from farthest and update our farthest position. Since each element is set to true at most once, the complexity is brought down from $O(n^2)$ to O(n).

```
bool canJump(int A[], int n) {
    if (n == 0)
        return false;

    // vector instead of a variable-length array (which isn't standard C++)
    vector<bool> canJump(n, false);
    canJump[0] = true;
    int farthest = 0;

    for (int i = 0; i < n; ++i) {
        if (canJump[i]) {
            if (i + A[i] > farthest) {
                // each cell is set to true at most once over the whole run
                for (int j = farthest; j < n-1 && j <= i + A[i]; ++j)
                    canJump[j] = true;
                farthest = min(n-1, i + A[i]);
                if (farthest == n-1)
                    return true;
            }
        }
    }

    return canJump[n-1];
}
```

Update:
Turns out there is no need for the DP array at all: since we know everything up to farthest is attainable, why not just check whether i <= farthest? This runs in 20ms instead of 70ms. Not so bad 😛

```
bool canJump(int A[], int n) {
    if (n == 0)
        return false;
    if (n == 1)
        return true;

    int farthest = 0;

    for (int i = 0; i < n; ++i) {
        if (i <= farthest) {
            if (i + A[i] > farthest) {
                farthest = min(n-1, i + A[i]);
                if (farthest == n-1)
                    return true;
            }
        }
    }

    return false;
}
```

# Path Sum II

### [Path Sum II]

Given a binary tree and a sum, find all root-to-leaf paths where each path’s sum equals the given sum.

For example:
Given the below binary tree and `sum = 22`,

```
      5
     / \
    4   8
   /   / \
  11  13  4
 /  \    / \
7    2  5   1
```

return

```
[
  [5,4,11,2],
  [5,8,4,5]
]
```

### [Analysis]

Don’t forget to pop!

And if you do pop, make sure exactly one element is pushed on each recursive call, so that every pop has a matching push!

```
/**
 * Definition for binary tree
 * struct TreeNode {
 *     int val;
 *     TreeNode *left;
 *     TreeNode *right;
 *     TreeNode(int x) : val(x), left(NULL), right(NULL) {}
 * };
 */
class Solution {
private:
    vector<vector<int>> res;
    vector<int> r;

    void pathSumHelper(TreeNode *node, int target) {
        if (!node->left && !node->right && target == node->val) {
            // node is a leaf and completes a matching path
            r.push_back(node->val);
            res.push_back(r);
            return;
        }

        // internal node; this also guarantees exactly one push per call
        r.push_back(node->val);
        int current = target - node->val;

        if (node->left) {
            pathSumHelper(node->left, current);
            r.pop_back();  // undo the child's push
        }
        if (node->right) {
            pathSumHelper(node->right, current);
            r.pop_back();  // undo the child's push
        }
    }

public:
    vector<vector<int>> pathSum(TreeNode *root, int sum) {
        res.clear();
        r.clear();

        if (!root)
            return res;

        pathSumHelper(root, sum);
        return res;
    }
};
```