May 30, 2016

Prototyping an LED-array-based cellphone case

Everyone loves staring at LEDs.

I put one into an instrument's on/off switch at a previous job.
Now imagine what I can do with multiple LEDs!  I want to make a beautiful cellphone case that shows different patterns on cellphone event notifications (e.g. incoming call or message, alarm).  I mocked it up with a hand-soldered APA102 array on cardboard.
I am drawing power from the 4.3 V available on my development board.  Imagination is the only limit for the patterns, but a product will require many improvements:
  1. A Bluetooth connection from the cellphone to the firmware, to receive various notifications.
  2. A custom BLE service from the phone to the controller, to map different patterns to predefined events.
  3. A Lightning connector to draw power straight from the phone.  The case will also have to provide another Lightning port so the user can continue to charge the phone.
I used the Nordic BLE example "proximity" app--which uses the Bluetooth Immediate Alert Service and the Link Loss Service to notify both the phone and the device about the link state change--to show different LED patterns.  Here is the "link established" pattern, pulsing with the confidence of a solid connection with the phone:
And here is the "link lost" pattern, showing distress.

Regardless of whether this is for the iPhone or Android, the sine qua non for this project is a high-current power source.  After reading about the MFi (Made for iPhone/iPad) slavery program, I decided I should work with Android instead, at least at first.  I am not sure about the Lightning connector, but after reading about USB OTG, I believe it is NOT possible to both charge an Android phone and drive the OTG port in host mode at the same time--which would mean that the user has to slide a switch to toggle between charging and supplying current to the LED array.

I COULD get rid of the switch by putting OTG-enabled FW inside the case, which would switch itself between the peripheral and hub roles automatically, as shown below.
I do have an ST Nucleo development board, which packs a USB OTG FS peripheral inside an M3.  It can serve the peripheral role without a problem.  But when I need to charge the phone, the case will have to relay power and data to the phone.

Since the phone cannot control the case in this mode, I will just have to put a hub inside the case.  If I still want to light up the LED array even while charging, I could send the notifications over Bluetooth, as shown below:

If the case is capable of the USB hub role anyway and can be controlled over BLE, it might as well always provide the USB hub feature, even when the phone is not charging.
But before introducing all these complications, let's just see some LEDs blinking when I get a phone call.

Minimum viable product using a USB-to-SPI bridge

The APA102 timing requirement is 1 MHz +/- 20%.  If I can drive a USB slave that drives out correct SPI signals, the APA102 array will be happy.  From a quick search for USB-SPI bridges on Digi-Key, it seems there are roughly 3 viable options: the FTDI FT4222H (~$1.43 in lot quantity), the Microchip MCP2210 (~$1.50 in lot quantity), and the SiLabs CP2130 (~$1.57 in lot quantity).  The CP2130 does not seem to support Android, which I must have.  I've used FTDI's D2XX driver in a previous project, and the J2XX driver should be usable from my Android app; I thought the FT4222H required a 12 MHz external crystal, but apparently it does not in the bus-powered configuration.  The MCP2210 is exposed to the host as a HID device, with a report that can drive out up to 62 bytes per USB command; an SPI transaction may be up to 0xFFFF bytes long.  Initially, the MCP2210 looked quite unattractive to me because of its requirement for an external 12 MHz oscillator, but its much lower current draw in the bus-powered configuration won me over (13 mA vs. 50 mA for the FT4222H when run at its slowest rate of 24 MHz).
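
To make concrete what the bridge has to clock out, here is a minimal sketch of how an APA102 frame could be assembled on the Android side before being handed to the bridge in 62-byte (or smaller) SPI chunks; the method name is mine, not from any vendor library.  Each LED takes a 4-byte frame of 0xE0|brightness, blue, green, red, preceded by a 4-byte zero start frame and followed by an end frame.

static byte[] apa102Frame(int[] rgb, byte brightness) { // brightness: 0..31, colors packed as 0xRRGGBB
    byte[] out = new byte[4 + 4 * rgb.length + 4];
    // 4-byte start frame of zeros (Java arrays are already zero-initialized)
    int i = 4;
    for (int color : rgb) {
        out[i++] = (byte) (0xE0 | (brightness & 0x1F)); // 0b111 + 5-bit global brightness
        out[i++] = (byte) (color & 0xFF);         // blue
        out[i++] = (byte) ((color >> 8) & 0xFF);  // green
        out[i++] = (byte) ((color >> 16) & 0xFF); // red
    }
    for (int j = 0; j < 4; j++) {
        out[i++] = (byte) 0xFF; // end frame: extra clock edges to push data through the chain
    }
    return out; // hand this to the USB-SPI bridge in <= 62-byte SPI chunks
}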

I bought an ADM00419 (MCP2210 breakout module) from the Microchip online store, and ported Microchip's example Android app for the MCP2210 (which is currently in ADT project form) into Android Studio 2 to connect to the device from my phone.  The ADT project does NOT import cleanly into Android Studio--Android Studio could improve in this area.  For example, I manually fixed the minimum SDK version discrepancy among the 3 projects; the import should have put the min SDK version into the build.gradle files instead of leaving it in the AndroidManifest.xml.

App crashes on Samsung Galaxy S5, Android 5.1.1

The example app crashed right away after detecting the MCP2210.  Here's the stack trace:

Caused by: java.lang.SecurityException: Admin  does not have android.permission.sec.MDM_APP_MGMT
at android.os.Parcel.readException(Parcel.java:1546)
at android.os.Parcel.readException(Parcel.java:1499)
at android.hardware.usb.IUsbManager$Stub$Proxy.requestDevicePermission(IUsbManager.java:653)
at android.hardware.usb.UsbManager.requestPermission(UsbManager.java:432)
at com.microchip.android.microchipusb.MCP2210.requestUsbPermission(MCP2210.java:161)
at com.microchip.android.mcp2210terminal.MainActivity.onCreate(MainActivity.java:530)

What is going on is that a regular app requires the user's permission to access a USB device, and the crash happens when the app requests that permission.  According to my brief web search, this problem is specific to MY phone, with its CURRENT Android version.  So I tried another device, and ran into another problem.
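
For reference, the standard permission flow that the library is going through looks roughly like this.  This is a simplified sketch of the stock UsbManager dance, not the Microchip code; ACTION_USB_PERMISSION is an app-defined string and the method name is mine.

import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.hardware.usb.UsbDevice;
import android.hardware.usb.UsbManager;

private static final String ACTION_USB_PERMISSION = "com.example.USB_PERMISSION";

void askForUsbPermission(Context ctx, UsbDevice device) {
    UsbManager usbManager = (UsbManager) ctx.getSystemService(Context.USB_SERVICE);
    PendingIntent pi = PendingIntent.getBroadcast(
            ctx, 0, new Intent(ACTION_USB_PERMISSION), 0);
    ctx.registerReceiver(new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            boolean granted = intent.getBooleanExtra(UsbManager.EXTRA_PERMISSION_GRANTED, false);
            // open the device only if granted == true
        }
    }, new IntentFilter(ACTION_USB_PERMISSION));
    usbManager.requestPermission(device, pi); // this is the call that throws on my Galaxy S5
}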

Nexus 4 (LG) on CM13 cannot detect the device

The rooted Nexus 4 did not even see ANY USB device.  With the app USB Host Diagnostics (free on the Play Store), I determined there is a problem with the custom kernel.  I've always known about Android developers' refusal to support custom ROMs, but this is the first time I ran into a problem caused by running one.

Since the example app has problems on BOTH my phones, I am somewhat stuck.  I hear that Samsung is rolling out the Android 6.0 update for Galaxy S5, so I will just prototype the control app on Windows until I can get the update.

Google Nexus 5 (LG) detected MCP2210 right away

After wrangling a used Nexus 5 running Android 6.0.1 from a Googler friend, I finally confirmed that Android can connect to the MCP2210.  This means that Android can enumerate a custom USB HID device, and I should be able to enumerate my own device similarly.  From browsing the example MCP2210 terminal Android application (downloaded from Microchip), it looks like enumeration and the detection callback are all handled by Android.  To send some bytes to the device, the Android application ultimately forms a USB OUT request:

public final ByteBuffer sendData(final ByteBuffer data) {
    if (data.capacity() > PACKET_SIZE) { // USB packet size is 64 bytes
        return null;
    }
...
    mMcp2210UsbOutRequest.initialize(mMcp2210Connection, mMcp2210EpOut);
    mMcp2210Connection.claimInterface(mMcp2210Interface, true);
    mMcp2210UsbOutRequest.queue(usbCommand, PACKET_SIZE);
...

usbCommand is just a byte buffer holding the raw data we want to send, and the OUT request object is not device specific.  Let's backtrack to how this connection and request are initialized:

import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;
import android.hardware.usb.UsbRequest;

private UsbDeviceConnection mMcp2210Connection;

/** USB request used for queuing data to the OUT USB endpoint. */
private final UsbRequest mMcp2210UsbOutRequest = new UsbRequest();

The endpoint is obtained from the device's interface:

if (mMcp2210Interface.getInterfaceClass() == UsbConstants.USB_CLASS_HID) {
    for (int j = 0; j < mMcp2210Interface.getEndpointCount(); j++) {
        if (mMcp2210Interface.getEndpoint(j).getDirection() == UsbConstants.USB_DIR_OUT) {
            mMcp2210EpOut = mMcp2210Interface.getEndpoint(j); // OUT usb endpoint found
        } else { // IN usb endpoint found
            mMcp2210EpIn = mMcp2210Interface.getEndpoint(j);
        }
    }
    break;
}

Controlling MCP2210 from Windows

I installed the MCP2210 terminal example Windows application downloaded from the Microchip website.  It ran without a problem, and I wrote about my experience in another blog entry.

Can't use APA102

While waiting for the USB-SPI bridge, I found 2 show-stopper problems with APA102.
  1. Talking to an ME colleague, it seems that shoving the ~5x5 mm APA102 into the user-facing side of a cellphone case would be a problem.
  2. According to this reference, the APA102 draws ~1 mA even when the light is completely off.  If I have 24 APA102s in a cellphone case, that is ~20 mA.  Compared to the MCP2210, which draws at most 0.5 mA (2.5 mA if remote wakeup is enabled) in the USB suspend state, the ~20 mA of extra current blows the USB specification limit of 2.5 mA for a device in the suspend state.
So I began looking for an efficient and small LED that I can drive directly from an LED driver chip.


May 20, 2016

IDR: the secret sauce in SWD initialization on ARM Cortex M

There are a lot of Cortex M0 cores out there because they are so cheap and suck up so little current.  But the flip side is that they have limited capabilities.  So one way to compose a fairly complex system is to control multiple M0-equipped peripherals (like the Nordic BLE chips) from a more powerful master uC (say an M4/7, which should just sleep most of the time).  But this approach engenders additional complexity: FW update and operational control of the M0 from the M4/7.  You could of course run the master uC as a serial interface master and the M0 as a slave.  But this eats into the precious serial interface ports on the master, and especially on the slave, which is already constrained in the number of serial interfaces (remember that cost is an important concern for the M0 uC).

Noting that ALL Cortex uCs have SWD pins (they have to be programmed at some point, after all), the master uC could control the slave by acting like simple debugger HW (like the SEGGER J-Link or the ST-Link) over the SWD lines.  If you have never looked at SWD at work, you are in for quite a learning experience.  But with patience, the 2 key references from ARM Ltd (the ARM Debug Interface Spec v5 and the ARMv6-M Architecture Reference Manual), and example code like OpenOCD, you can pull this off.

Recently, I spent a long time figuring out why the SWD interface goes comatose after a hard reset or power cycle of the M0.  The go-to references and the example code didn't show me that I was doing anything wrong.  It appeared that the M0 was returning status (in the CSW register) with the DeviceEn bit = 0, but since I was already following the ARM-recommended steps to initialize the SWD interface, I didn't know what else I had to do to make the M0 come alive.

Finally, I sniffed what the J-Link was doing, and waded through the hundreds of (undocumented) transactions it was exchanging with the M0.  I noticed it was READING from the IDR (identification register).  When I inserted that read into my code--at the end of the SWD debug domain power-up--the M0's DeviceEn bit was finally set to 1.  In all my searches for the CSW.DeviceEn bit or even the IDR register, I did not find a single mention of this side effect of reading the IDR.  Perhaps someone can explain this undocumented (as far as I can tell) behavior of the Cortex-M0 debug core.
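
To make the fix concrete, here is a sketch of the power-up sequence with the IDR read added, written in Java for brevity against a made-up SwdLink transfer interface.  The DP/AP register addresses and bit positions come from the ADIv5 spec; a real implementation also has to handle the posted AP read (via DP RDBUFF) and transfer ACK/retry logic, which I omit here.

interface SwdLink { // hypothetical transport: one SWD transfer per call
    int dpRead(int addr);
    void dpWrite(int addr, int value);
    int apRead(int offset); // offset within the AP bank selected via DP SELECT
}

static boolean powerUpDebug(SwdLink swd) {
    final int CTRL_STAT = 0x4, SELECT = 0x8;           // DP register addresses
    final int CDBGPWRUPREQ = 1 << 28, CDBGPWRUPACK = 1 << 29;
    final int CSYSPWRUPREQ = 1 << 30, CSYSPWRUPACK = 1 << 31;

    // 1. The ARM-documented step: request debug and system power-up, then poll the ACKs
    swd.dpWrite(CTRL_STAT, CDBGPWRUPREQ | CSYSPWRUPREQ);
    while ((swd.dpRead(CTRL_STAT) & (CDBGPWRUPACK | CSYSPWRUPACK))
            != (CDBGPWRUPACK | CSYSPWRUPACK)) { /* spin */ }

    // 2. The step JLink adds: read the MEM-AP IDR (bank 0xF, offset 0xC)
    swd.dpWrite(SELECT, 0xF0);  // APSEL = 0, APBANKSEL = 0xF
    swd.apRead(0xC);            // this read is what finally woke up my M0

    // 3. Back to bank 0; CSW.DeviceEn (bit 6) should now read 1
    swd.dpWrite(SELECT, 0x00);
    return (swd.apRead(0x0) & (1 << 6)) != 0;
}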

May 18, 2016

Binary message protocol for custom BLE service

Just can't stop staring at those LED strips

I have yet to meet anyone who doesn't think LEDs are cool.  I am normally too busy with more serious problems, but I've been dreaming about decorating my home office with an LED ceiling lamp that displays pleasant patterns.  The original LED strip that most hobbyists have surely seen by now is driven by the original WS1xxx LED + controller chip, which became a megahit partly because of its SPI-like, daisy-chained controllability.  When I discovered a follow-on product, the APA10x, which addresses one of the critical shortcomings of its predecessor--the timing difficulty, since the signal is only SPI-like and must meet strict real-time requirements rather than being a true clocked SPI--I treated myself to a couple of APA102 LED strips.
I am not sure how I want to arrange the >100 LED pixels, but I am sure I don't want to drive them by hand, which means I need a microcontroller that can drive out SPI through DMA.  The uC will be hidden away in my room, so I decided to use my phone as the UI--which means I need bi-directional BLE (Bluetooth Low Energy, AKA Bluetooth Smart) communication between my uC and phone, as shown in the introduction for the Nordic nRF51DK.
I hate writing the spaghetti code often found in low-level FW, and I don't want to run a full-fledged real-time OS on a resource-constrained chip like the nRF51, so I ported my favorite real-time FW framework--QPC--to the nRF51DK and wrote about the process here.

NUS: custom BLE service that can be easily modified to my liking

As I wrote in the above Google doc, I did not find any Bluetooth SIG-defined GATT service for my LED array ceiling lamp project.  Many BLE projects wind up creating a custom service, and Nordic's UART service example is a great project to copy from: it offers an IN and an OUT characteristic under a service called NUS.  It keeps things simple: no security provision, encryption, or bonding.  The MTU size is the Nordic stack default (23 bytes), which means that the maximum usable message size is 20 bytes.  The example loops a UTF-8 string from the phone to the nRF51DK and back.  My experience of going through the NUS tutorial, and of marrying the example with a simple state machine application, is also in the above Google doc.

Self-describing binary message protocol

To get even moderate bandwidth, a binary message protocol is a must.  The QS message protocol is part of the QP framework I like so much, and is adequately described in Miro Samek's book.  Briefly, I find the following features attractive for this project:
  1. Binary
  2. Self-describing message fields.  That is, even for the same message ID (say "COMMAND"), both the number and order of the arguments can change and do not have to be hard-coded (see the sketch after this list).
  3. Light-weight: the whole library consists of a few functions, and the internal message buffer is loaned to the library by the application--which can decide the buffer size.
    1. This light weight is possible in part because the message protocol is NOT reliable or secure, which works here because BLE GATT already provides reliability and security.
  4. A message can be fragmented for transports with small MTU size.  Precisely the situation here!
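To illustrate point 2, here is a sketch of two messages that share an ID but carry completely different argument lists.  The only encoders that appear later in this post are BEGIN()/I16()/END()/FLUSH(), so treat U8(), U32(), and STR() as assumptions that mirror the C original's QS_U8()/QS_U32()/QS_STR(); the COMMAND ID is likewise made up.

static final byte COMMAND = 50; // hypothetical application-specific message ID

void commandWithTwoArgs(Msg msg) {
    msg.BEGIN(COMMAND);
    msg.U8((byte) 3);       // a sub-command code, tagged with its own format byte
    msg.U32(0x20001000);    // an address argument
    msg.END();
}

void commandWithStringArg(Msg msg) {
    msg.BEGIN(COMMAND);     // same ID, but the receiver discovers a string instead
    msg.STR("rainbow");
    msg.END();
}
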
Miro is a stickler for efficient code, so QS has a rich filtering capability (conditional evaluation of a few integer comparisons).  This is necessary for a SW tracing library, but not necessarily for a thin message protocol.  Because Miro focuses on C/C++, QS has not yet been ported to Java/Android (or to iOS, for that matter).  Ripping out the filtering support from the QS C source is straightforward, so most of the work was in the Java port and integration with the Nordic NUS example Android app.
My code is in the Lyle folder of my GitHub realtime repository (https://github.com/henrychoi/realtime).
For now, I support only basic message types, plus peek/poke of target memory:

public interface MsgType {
    public static final byte
            EMPTY = 0, /*!< MSG record for cleanly starting a session */
            PEEK_MEM = 1, // Reading target memory is very handy
            POKE_MEM = 2, // Writing target memory is very handy
            PEEK_RES = 3, // Answer to the peek
            APP_SPECIFIC = 4, // Begin application specific messages
            WPAR_S = APP_SPECIFIC, // Just 1 string arg
            WPAR_0 = 10, // Message with no arg
            WPAR_8 = 20, // Message with just 1 byte arg
            WPAR_16 = 30, // Message with just 2 byte arg
            STATE = WPAR_16,
            WPAR_32 = 40; // Message with just 4 byte arg
}

You can add your own message IDs (and I plan to add lots more).  To use it, Nordic's existing UartService class instantiates the new Msg class:

byte[] Msg_buf = new byte[1024];
Msg msg = new Msg(Msg_buf, (short) Msg_buf.length, this);

The UartService instance owning the message is passed as the last argument to the Msg constructor, because it implements the Msgable interface (which I'll introduce in the receive path) to handle the message flush and messages received from the target.

Msg send path (Android to the target)

Android sender

A message is bracketed with a header that includes the message type.  Between the beginning and the end of a message, supported primitive types can be encoded, as in the example below.
void write(short b) {
    msg.BEGIN(MsgType.STATE);
    msg.I16(b); // Direct the query to all SMs
    msg.END();
    msg.FLUSH(); // Send the request right away?
}


Here, I chose to flush the message out right away, but you can just queue the message and have another thread drain the TX message queue (and I plan to), to increase throughput (at the cost of latency).
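
If I do go the queued route, a minimal sketch could look like the following, using a single-thread executor.  The Msg class is not written for concurrent access, so I serialize on it; writeDeferred() is a hypothetical name.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

private final ExecutorService txDrainer = Executors.newSingleThreadExecutor();

void writeDeferred(final short b) {
    synchronized (msg) {      // encode on the caller's thread
        msg.BEGIN(MsgType.STATE);
        msg.I16(b);
        msg.END();
    }
    txDrainer.execute(new Runnable() {
        @Override
        public void run() {
            synchronized (msg) {
                msg.FLUSH();  // drains via Msgable.flushTX() off the caller's thread
            }
        }
    });
}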

The writeOUT() method was already provided by Nordic's example code; I just renamed it to include the characteristic name for clarity.  One unique feature of the QS protocol is detection of a message stream restart or a corrupted message (and automatic tossing of the interrupted/corrupted message).  The EMPTY message in the MsgType list you saw earlier makes the stream reset possible.  The Msg constructor encodes an EMPTY message:

public Msg(byte[] sto, short stoSize, Msgable ifc) {
    priv_.buf = sto;
    priv_.end = stoSize;
    this.ifc = ifc;

    /* produce an empty record to "flush" the Msg trace buffer */
    beginRec(MsgType.EMPTY);
    endRec();
}

When the message stream is flushed, the flushTX() interface method of the Msgable is called.

public interface Msgable {
    void flushTX();
    void onTargetMsg(TargetMsg m);
}

An implementation is responsible for draining the TX queue and handing the bytes to the appropriate transport, as in this UartService example:

static final short NUS_PAYLOAD = 23 - 3; // default ATT MTU (23) minus the 3-byte ATT header
public void flushTX() {
    while(true) {
        Pair<Short, Short> block = msg.getBlock(NUS_PAYLOAD);
        if (block == null) break;
        byte[] blk = Arrays.copyOfRange(Msg_buf //copy to tail+n-1
                , block.first, block.first + block.second);
        writeOUT(blk);
    }
}
The contract between the message protocol and the transport is simply this hand-off of raw bytes.

Target receiver

The Nordic UART example already supplies the nus_data_handler() callback, which I modified to plumb the received bytes into the message parser (after logging a trace message):

static void nus_data_handler(ble_nus_t * p_nus, uint8_t * p_data, uint16_t length)
{
    QS_BEGIN(TRACE_NUS_DATA, &l_softdevice)
        QS_MEM(p_data, length);
    QS_END()
    MSG_parse(p_data, length);
}


My parser creates a new event and publishes it to all interested (subscribed, in publish-subscribe parlance) active objects.

static void QSpyRecord_processUser(QSpyRecord * const me) {
    uint8_t fmt;
    uint32_t u32;
    NUSEvt* pe = Q_NEW(NUSEvt, NUS_SIG);
    pe->type = me->rec; // NUS message type

    while (me->len > 0) {
        fmt = (uint8_t)QSpyRecord_getUint32(me, 1);  /* get the format byte */

        switch (fmt) {
...
          default:
           QS_BEGIN(TRACE_MSG_ERROR, (void *)0)
            QS_U8(0, MSG_ERROR_UNEXPECTED);
            QS_U8(0, me->rec);
            QS_U8(0, fmt);
           QS_END()
           me->len = -1;
           break;
        }
    }
    QF_PUBLISH(&pe->super, me);
}


My particular handling is of course intimately tied to my FW infrastructure (QPC), but you are free to handle the message in your own way.

Target --> Android path

Target side

In this example, let's say the Android side wants to query the current state of all the active objects in the FW (this is how I like to build up the SW/FW interface).  My active object can then respond to the NUS_SIG published by the low-level code, as in this example:

 switch(e->sig) {
 case NUS_SIG: {
  const NUSEvt* pe = (const NUSEvt*)e;
  switch(pe->type) {
  case MSG_STATE: // state query
   MSG_BEGIN(MSG_STATE);
   MSG_I16(TABLE_STATE_ACTIVE | AO_TABLE);
   MSG_END();
   break;
  default: break;
  }
 } return Q_HANDLED();


The above code merely stuffs a message into the target's TX queue.  Unlike the Android side, it does not flush the queue right away, but rather waits for an idle time for that to happen.

void QV_onIdle(void) { /* called with interrupts disabled, see NOTE01 */
...
    if (m_nus.conn_handle != BLE_CONN_HANDLE_INVALID
      && m_nus.is_notification_enabled) {
        uint16_t n = BLE_NUS_MAX_DATA_LEN;
        uint8_t* msg_buf = (uint8_t*)MSG_getBlock(&n);
        if (msg_buf) {
            QS_BEGIN(TRACE_MSG_OUT, (void *)0)
               QS_MEM(msg_buf, n);
            QS_END()

            uint32_t err_code = ble_nus_string_send(&m_nus, msg_buf, n);
            Q_ASSERT(err_code == NRF_SUCCESS);
        }
    }


Android receiver

The Nordic UART example already provides a couple of callback methods to handle reception on the IN (BLE) characteristic.  I just have to hand the data off to my own handler.

@Override
public void onCharacteristicRead(BluetoothGatt gatt,
                                 BluetoothGattCharacteristic characteristic,
                                 int status) {
    if (status == BluetoothGatt.GATT_SUCCESS
            && IN_CHAR_UUID.equals(characteristic.getUuid())) {
        handleChar(characteristic);
    }
}

@Override
public void onCharacteristicChanged(BluetoothGatt gatt,
                                    BluetoothGattCharacteristic characteristic) {
    if (IN_CHAR_UUID.equals(characteristic.getUuid())) {
        handleChar(characteristic);
    }
}

Like the target side, my handler will just delegate to the message parser.

//@brief Handle IN Characteristic notification of NUS service
void handleChar(final BluetoothGattCharacteristic characteristic) {
    byte[] data = characteristic.getValue();
    Log.d(TAG, String.format("Received IN characteristic, %d bytes", data.length));
    msg.parse(data); // Parse may find 0 or more TargetMsg (see onTargetMsg)
}

The parser is a simple state machine that moves through the sequential states of the message until the final FRAME (message complete) byte is received, at which time the completed message is handed off to an intermediate-level handler:

void process() {
    switch(rec) {
        case MsgType.EMPTY:
        case MsgType.PEEK_MEM:
        case MsgType.POKE_MEM:
            break; // silently ignore
        case MsgType.PEEK_RES:
            TargetMsg m = new TargetMsg();
            m.data[0] = getInt();
            m.data[1] = getInt();
            ifc.onTargetMsg(m);
            break;
        default:
            processUser(); break;
    }
}

You've seen this movie before: the processUser() method is the last leg of the message infrastructure before the message is handed off to the application's own handler.

void processUser() {
    TargetMsg m = new TargetMsg();
    m.typ = rec;

    while (len > 0) {
        byte fmt = getByte();
        switch (fmt) {
...
            default:
                Log.e(TAG, String.format("********** %d: Unknown format %d",
                        rec, fmt));
                len = -1;
                break;
        }
    }
    ifc.onTargetMsg(m);
}

The "ifc" above is the Msgable interface, which UartService implements.

public void onTargetMsg(TargetMsg m) {
    final Intent intent = new Intent(ACTION_PERIPHERAL_MSG);
    intent.putExtra(PHERIPHERAL_MSG, m);
    LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
}

You can see that I now hand the message to the Android Intent/Broadcast infrastructure.  The only missing piece is to make TargetMsg conform to the Parcelable interface, so Android can serialize it and ship it off to a different Activity/Service/process.

public class TargetMsg implements Parcelable {
    public byte typ; // See MsgType
    public int[] data = new int[2];

    public static final Parcelable.Creator<TargetMsg> CREATOR =
            new Parcelable.Creator<TargetMsg>() {
        @Override
        public TargetMsg createFromParcel(Parcel parcel) {
            TargetMsg m = new TargetMsg();
            m.typ = parcel.readByte();
            m.data[0] = parcel.readInt();
            m.data[1] = parcel.readInt();
            return m;
        }

        @Override
        public TargetMsg[] newArray(int size) {
            return new TargetMsg[size];
        }
    };

    @Override
    public int describeContents() { return 0; }

    @Override
    public void writeToParcel(Parcel parcel, int flags) {
        parcel.writeByte(typ);
        parcel.writeInt(data[0]);
        parcel.writeInt(data[1]);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        TargetMsg that = (TargetMsg)o;
        return typ == that.typ
                && data[0] == that.data[0]
                && data[1] == that.data[1];
    }
}

Finally, here is an example of what my main activity does with the received message:

...
} else if (action.equals(UartService.ACTION_PERIPHERAL_MSG)) {
    final TargetMsg m = intent.getParcelableExtra(UartService.PHERIPHERAL_MSG);
    runOnUiThread(new Runnable() {
        public void run() {
            try {
                String currentDateTimeString = DateFormat.getTimeInstance().format(new Date());
                String text = Byte.toString(m.typ); //new String(inValue, "UTF-8");
...

One shortcoming of my solution is that the TargetMsg has to handle all possible payloads.  It may be preferable to have different classes specialized for different message IDs.
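
For example, a dedicated class for the STATE message might look like this; StateMsg is a hypothetical name, and it carries only the single 16-bit payload of a STATE message.

public class StateMsg implements Parcelable {
    public final short state; // the lone WPAR_16 payload of a STATE message

    public StateMsg(short state) { this.state = state; }

    @Override
    public int describeContents() { return 0; }

    @Override
    public void writeToParcel(Parcel parcel, int flags) {
        parcel.writeInt(state);
    }

    public static final Parcelable.Creator<StateMsg> CREATOR =
            new Parcelable.Creator<StateMsg>() {
        @Override
        public StateMsg createFromParcel(Parcel parcel) {
            return new StateMsg((short) parcel.readInt());
        }

        @Override
        public StateMsg[] newArray(int size) {
            return new StateMsg[size];
        }
    };
}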

Licensing

The NUS-example-derived files fall under the Nordic SDK license you will find in the Nordic SDK v11.0, while the QS-derived sources probably inherit QPC's dual-licensing model.  I myself do not claim any copyright for my contribution.

May 9, 2016

Life lessons: communicate that you are principled

"Principled" is a relative concept.  I thought I was principled, but I probably have some more ways to go.  Like everything I blog, what I write below is 100% true.  I think few people will reveal their failures and shortcomings as I do.  But I am mature and secure enough to own up to myself as a complete person--with strengths as well as weaknesses.

I have my LinkedIn settings set to not receive any recruiter pings, but they manage to get through somehow.  A few weeks ago, a Tesla FW recruiter emails me: do I want to interview for a FW position?  I politely decline; I am well off at Jawbone and I haven't been there long enough to have run out of learning opportunities, so I am not interested unless I get a significant pay raise, which I believe is not possible (the recruiter told me Tesla likes to slightly underpay in return for equity).  Then the recruiter comes back: would I consider a TPM (technical program manager) role, since I have a lot of varied experience in the technology sector?  I am surprised, and intrigued, so I agree to a phone conversation.

Several days later, a FW manager (Michael) calls me and tells me he needs a FW engineer.  I ask about the TPM role, but Michael tells me that's news to him, and that he's only interested in my FW skills.  I ask him what he needs, and believe that I can do what he wants.  But I tell him what I told the recruiter: I would be burning my bridges at Jawbone, so I would only consider moving for a significant salary raise.  That's the end of the brief phone call with Michael, and I don't hear anything back from Tesla.  When I ask the recruiter what they didn't like about me, he tells me it's because I put money down as a precondition, whereas they are looking for people who buy into Tesla's mission, vision, etc.

Well this is the interesting part, isn't it?  I mean: why do we work for someone else (i.e. non-family or friends)?  For me, it's the paycheck first, and then self-fulfillment, and camaraderie.  Are there really people at Tesla who put the dream of an all-electric transportation future above all else?  How about if Tesla stock were at $10/share?  Whatever the case may be, where I believe I rubbed Tesla the wrong way is that:
  1. I didn't include company mission and vision in my list; i.e. I am not a Tesla fan-boy.  In fact, I am not a fan-boy of any kind.
  2. I implied that all of the intangible reasons one works for someone else--even a hot company like Tesla--have a price (a high price in my case).
I own up to both.  It would be arrogant on Tesla's part to consider its mission/vision any more noble than Jawbone's--even if Tesla is the public's darling right now.  The second point is debatable: Jawbone's last President (Sameer) left after only several months on the job because, according to him, he "got an opportunity of a lifetime at Google"; does that mean he is not principled?  My mistake was not presenting my thoughts in a more positive light.  Since I am well compensated and my situation is relatively secure at the moment, I was actually only interested in the TPM role; I would have considered leaving Jawbone only for, say, another $50k--which would put me in an unheard-of salary range for a FW engineer.  Since this was an impossibility, I should have just said I was not interested in the FW position.

I still wonder though: does being a realist mean I am unprincipled?

git rebase

The most common review comment on a pull request is to "squash the commits together, and rebase on top of master".

Simplest rebase: local squash

Before I push my commits, I can always squash my local commits into 1.  Let's say I have 10 local commits to squash into 1.  Then I can begin the interactive rebase with this command:

git rebase -i HEAD~10
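
git then opens the rebase directive ("todo") file in an editor, oldest commit first; with made-up commit IDs and messages, it looks something like this:

pick 1a2b3c4 Add APA102 frame builder
pick 5d6e7f8 Fix off-by-one in the end frame
pick 9a0b1c2 Tweak brightness scaling
(... 7 more pick lines ...)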

The oldest commit in the series should be left at "pick", and the rest should be changed to "s" (for squash).  When the temporary rebase directive file is saved and closed, git opens another editor window, where I can edit the combined commit message.  This is the simplest rebase scenario.  When I work with another developer who already has my commits and I STILL want to rebase, the other developer has to jump through some hoops.  In the simple case, where he does not have any local changes:

git pull

Otherwise, git pull --rebase

When all else fails, cherry-pick

In the worst case, he can reset to the current remote branch AFTER writing down his own commits:

git log --oneline or
git log -n 5 --oneline to see the latest 5 commits, and copy the commit IDs of the changes to keep.  Then

git fetch
git reset --hard origin/<branch name>
git pull

to sync to the repo 'as is' and proceed to "git cherry-pick" your changes.
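
For example, with the commit IDs written down earlier:

git cherry-pick <commit id 1> <commit id 2>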

Rebase after cherry-pick


The whole idea, if you end up fixing conflicts during a rebase, is to get rid of your local merge commits, so in a worst-case scenario, before pushing back to the server you could do:

git rebase -i <FROM_ID>

with <FROM_ID> being the commit ID of the change just before the point where you'd like to start the interactive rebase.