Jan 21, 2015

Understanding the Zynq ADC Linux driver

After successfully reading the temperature from a Honeywell HSCDAND030PGSA3 pressure-temperature sensor over SPI in a previous blog entry, I tried to think of a natural application of a protocol SPI driver for this sensor, but could not come up with a convincing example outside customized HW control--definitely not the typical desktop/mobile Linux usage.  I am interested in such problems, and I plan to spend much time exploring this niche.  But a customized Linux-based controller requires a lot of real-time framework, which will probably require something other than the stock Linux kernel I've been running on the Zedboard till now.  So the next best usage I could think of is a thermal control application.  Since modern mobile platforms are powerful BUT lack a fan, thermal management is important.  Many SoCs can measure CPU temperature and regulate the CPU frequency or idle time, but perhaps such management algorithms can be improved with an ambient temperature sensor?  Fundamentally, any downstream usage of the CPU temperature starts with measuring the temperature, so let's start with the HW.

Current status (so you don't have to read the whole thing)

Able to read the XADC raw readings, scale, and offset from a user program through the Linux IIO (Linux for industrial I/O) framework.  Working on understanding the IIO trigger framework.

On-chip temperature sensor on Zynq XADC peripheral

The first step in integrating an external sensor into the Linux thermal control framework would be to understand how the CPU temperature reading is used within the current thermal control framework.  The Zynq XADC measures the on-chip temperature to +/- 4 C with a 12-bit ADC, and has an alarm/interrupt feature.
To understand this HW, I first watched this video and this, and read the project stand-alone BSP's documentation on the xadcps module, and then Xilinx document ug480, which said:
  • [In addition to the DRP (dynamic reconfiguration port), ] ADC conversion data is also accessible through the JTAG TAP.
  • For JTAG TAP, users are not required to instantiate the XADC because it is a dedicated interface that uses the existing FPGA JTAG infrastructure.
  • If the XADC is not instantiated in a design, the device operates in a predefined mode (called default mode) that monitors on-chip temperature and supply voltages.

  • I found it strange that while the XADC HW is 12-bit, the BSP exposes 10-bit data...

    XADC Linux driver

    It turns out that Xilinx already supplies the device driver for the XADC (static struct platform_driver xadc_driver), in <kernel>/drivers/iio/adc/xilinx-xadc-core.c (ignore xilinx-xadc.c; it is not pulled into vmlinux), with a matching DTS node already in zynq.dtsi:

    xadc@f8007100 {
            compatible = "xlnx,zynq-xadc-1.00.a", "xlnx,ps7-xadc-1.00.a";
            reg = <0xf8007100 0x20>;
            interrupts = <0 7 4>;
            interrupt-parent = <&gic>;
            clocks = <&clkc 12>;
    };

    It was confusing that there were 2 files that had the same symbols (probe and remove).  Clearly, one was newer than the other, but why not remove the other one?  Finding the version that IS pulled into the kernel was easy: list the symbol in the cross-debugger.  But finding HOW Kbuild is pulling that file involved a detour through the Kbuild documentation.  The (somewhat less than satisfying) answer was in <kernel>/Documentation/kbuild/modules.txt, section 3.4: Building Multiple Modules, where this example is given:

    obj-m := foo.o bar.o
    foo-y := <foo_srcs>
    bar-y := <bar_srcs>

    This pattern matches what I found in the <>/drivers/iio/adc/Makefile, where

    obj-$(CONFIG_XILINX_XADC) += xilinx-xadc.o

    does NOT mean "pull-in xilinx-xadc.c's object file", but rather, "pull in the module xilinx-xadc", whose dependency is specified in the line above that

    xilinx-xadc-y := xilinx-xadc-core.o xilinx-xadc-events.o

    All of this is required to support out-of-tree modules, which is helpful when first writing a device driver, I suppose.

    The Xilinx-supplied device driver can control the XADC directly from the CPU, or through the AXI "bus"--both through memory-mapped registers--and declares that flexibility in the match table:

    static const struct of_device_id xadc_of_match_table[] = {
    { .compatible = "xlnx,ps7-xadc-1.00.a", (void *)&xadc_ps7_ops },
    { .compatible = "xlnx,axi-xadc-1.00.a", (void *)&xadc_axi_ops },
    { },
    };

    In the SPI master device driver case explored in a previous blog entry, I saw that the Linux spi_master device driver framework did much of the heavy lifting.  The same is probably true for the IIO device driver framework, which raises the question: what is an IIO device?  I can see that on the Zedboard, the device driver has already populated the data it knows about in sysfs:

    # ls -lh  /sys/bus/iio/devices/
    iio:device0 -> ../../../devices/amba.0/f8007100.xadc/iio:device0
    # ls /sys/bus/iio/devices/iio\:device0/
    dev                        in_voltage2_vccbram_raw    in_voltage6_vrefp_scale
    events                     in_voltage2_vccbram_scale  in_voltage7_vrefn_raw
    in_temp0_offset            in_voltage3_vccpint_raw    in_voltage7_vrefn_scale
    in_temp0_raw               in_voltage3_vccpint_scale  name
    in_temp0_scale             in_voltage4_vccpaux_raw    power
    in_voltage0_vccint_raw     in_voltage4_vccpaux_scale  sampling_frequency
    in_voltage0_vccint_scale   in_voltage5_vccoddr_raw    subsystem
    in_voltage1_vccaux_raw     in_voltage5_vccoddr_scale  uevent
    in_voltage1_vccaux_scale   in_voltage6_vrefp_raw
    # cat /sys/bus/iio/devices/iio\:device0/sampling_frequency 
    961538
    # cat /sys/bus/iio/devices/iio\:device0/in_temp0_raw 
    2551
    # cat /sys/bus/iio/devices/iio\:device0/in_temp0_scale 
    123.040771484
    # cat /sys/bus/iio/devices/iio\:device0/in_temp0_offset 
    -2219

    Applying (raw + offset) * scale yields about 40.8 C (IIO reports processed values in milli-degrees, hence a final division by 1000), which is a reasonable die temperature.  At first glance this looks different from the formula given in the HW manual, Xilinx document ug480, Equation 1-2: T [C] = ADC * 503.975 / 4096 - 273.15.  But 503.975 / 4096 * 1000 is exactly the 123.040771 scale, and -273.15 / 0.123040771 is about the -2219 offset, so they are the same formula.  Let's find out how the scale and offset were set in the device driver.
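    A minimal user-space check of the conversion, using the exact values read from sysfs above (xadc_temp_celsius is my own helper name, not part of any API):

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* IIO convention: value = (raw + offset) * scale, here in milli-degrees C */
    static double xadc_temp_celsius(long raw, long offset, double scale_mdeg)
    {
            return (raw + offset) * scale_mdeg / 1000.0;
    }

    int main(void)
    {
            /* values from the sysfs reads above */
            double t = xadc_temp_celsius(2551, -2219, 123.040771484);

            printf("die temperature: %.1f C\n", t);
            assert(fabs(t - 40.85) < 0.1);
            return 0;
    }
    ```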

    XADC is an IIO device driver, but also a platform driver

    As described by Maxime Ripard of Free Electrons, the IIO bus is a subsystem for ADC/DAC devices.  iio_dev is a device struct, and holds the driver's device-specific private data (as opaque memory of requested size).  An ADC device sometimes has 2 optional HW features: HW triggering and data buffering in HW.  Let's read the XADC device driver written by Lars-Peter Clausen to learn about the IIO framework.

    Reading/writing XADC registers through JTAG

    Writing

    XADC register access has 2 flavors: the registers documented in the TRM are accessed directly through memory-mapped access from the PS in xadc_read/write_reg, while the HW registers behind the JTAG DRP are read/written through the TAP (test access port, described in ug480 chapter 3) in xadc_read/write_adc_reg, which calls a driver-specific pure virtual method (xadc_zynq_read/write_adc_reg for the PS access version).  I've seen direct memory-mapped register access from the PS many times, but never through the DRP.

    As discussed in an earlier blog entry, the interface into the uninstantiated (in the FPGA HW design) version (the default) of the XADC peripheral is through the JTAG signalling that encodes the DRP commands.  But fortunately, Zynq provides a command FIFO on top of the serializer module (no doubt some kind of a serial peripheral), so I don't have to worry about clocking and the data shifting.  But the driver does have to manage the potentially significant wait time required until the write is complete on the XADC side, in xadc_zynq_write_adc_reg(), which I step through below.

    The strategy is to use the command FIFO like a typical TX FIFO, and the data FIFO like a typical RX FIFO:

    1. Disable the interrupt on the data FIFO--i.e., ignore data received
    2. Write the command
    3. Clear the DFIFO interrupt status (in case it is set)
    4. Enable the interrupt on the data FIFO, with threshold = 0, meaning: interrupt on ANY data
    5. Wait for the data to be received within 1 second, using the completion primitive
    6. Read the returned data from the data FIFO

    The driver uses spin_lock_irq (because this should be executed in a kernel process--i.e. not interrupt--context) to protect steps 1~4.  Why doesn't waiting for completion have to be protected?  That would depend on how the interrupt processing thread "completes" this completion.  complete() just signals 1 waiting thread, while complete_all() signals all pending threads.

    To disable the data FIFO interrupt, the PS memory mapped INT_STATUS register is changed:

    xadc_zynq_update_intmsk(xadc, XADC_ZYNQ_INT_DFIFO_GTH,
    XADC_ZYNQ_INT_DFIFO_GTH);

    which is just

    xadc->zynq_intmask &= ~XADC_ZYNQ_INT_DFIFO_GTH;
    xadc->zynq_intmask |= XADC_ZYNQ_INT_DFIFO_GTH;

    xadc_write_reg(xadc, XADC_ZYNQ_REG_INTMSK,
    xadc->zynq_intmask | xadc->zynq_masked_alarm);

    The XADC JTAG DRP command is formed in a macro, according to the bitmap in ug480 Figure 3-7.

    cmd[0] = XADC_ZYNQ_CMD(XADC_ZYNQ_CMD_WRITE, reg, val);

    I would have formed this value outside a spinlock, but I guess the cost is small.  The XADC command FIFO can apparently take multiple commands:

    xadc_zynq_write_fifo(xadc, cmd, ARRAY_SIZE(cmd));
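    For reference, the command word formed above follows ug480 Figure 3-7: a 4-bit command in the top bits, the DRP address in the middle, and the 16-bit data at the bottom.  A quick user-space sketch of the macro (the register address 0x42 and data value are hypothetical; command codes NOP=0, READ=1, WRITE=2 as in the driver):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* DRP command word per ug480 Figure 3-7:
     * [29:26] command, [25:16] DRP address, [15:0] data */
    #define XADC_ZYNQ_CMD_NOP   0x0
    #define XADC_ZYNQ_CMD_READ  0x1
    #define XADC_ZYNQ_CMD_WRITE 0x2
    #define XADC_ZYNQ_CMD(cmd, addr, data) \
            (((uint32_t)(cmd) << 26) | ((uint32_t)(addr) << 16) | (data))

    int main(void)
    {
            /* write 0x1234 to (hypothetical) DRP register 0x42 */
            uint32_t cmd = XADC_ZYNQ_CMD(XADC_ZYNQ_CMD_WRITE, 0x42, 0x1234);

            assert(cmd == 0x08421234);
            return 0;
    }
    ```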

    Next, the DFIFO interrupt threshold is set to 0 through a read-modify-write of the CFG register (the pending DFIFO interrupt status itself is cleared by writing the corresponding INTSTS bit right after); notice the no-op OR with zero, which is hopefully optimized out by the compiler:

    xadc_read_reg(xadc, XADC_ZYNQ_REG_CFG, &tmp);
    tmp &= ~XADC_ZYNQ_CFG_DFIFOTH_MASK;
    tmp |= 0 << XADC_ZYNQ_CFG_DFIFOTH_OFFSET;
    xadc_write_reg(xadc, XADC_ZYNQ_REG_CFG, tmp);

    The interrupt is enabled again

    xadc_zynq_update_intmsk(xadc, XADC_ZYNQ_INT_DFIFO_GTH, 0);

    Waiting for the completion returns 0 on timeout--which is confusing, being different from the usual Linux convention where 0 means OK.

    ret = wait_for_completion_interruptible_timeout(&xadc->completion, HZ);

    Note that the DFIFO is read out regardless of the outcome of waiting, as it is OK to read an empty DFIFO, and we are not returning the read value anyway.

    xadc_read_reg(xadc, XADC_ZYNQ_REG_DFIFO, &tmp);

    Reading

    Since reading consists of the "read" command followed by a wait, followed by a no-op (just to exchange bits with the slave), reading flows similarly to writing.  The differences are: 2 consecutive writes into the command FIFO, followed by 2 consecutive reads from the data FIFO, which requires that the DFIFO threshold level now be 1 instead of 0; and we need to drain the DFIFO before starting (within the spinlock).  So the following snippet at the end of xadc_zynq_read_adc_reg() makes sense:

    xadc_read_reg(xadc, XADC_ZYNQ_REG_DFIFO, &resp);
    xadc_read_reg(xadc, XADC_ZYNQ_REG_DFIFO, &resp);

    And so does this, if I remember that the XADC status registers are 16-bit (some are 12-bit, left-justified to the MSB):

    *val = resp & 0xffff;

    Can read/write be concurrent?

    Initiating the read/write is protected within spinlock, but waiting for completion is not.  The kernel guarantees that complete() wakes up the 1st thread that started waiting for it, but since the wait_for_completion() is outside the spinlock, there is no guarantee that the read/write won't be mixed.

    xadc_probe(struct platform_device *pdev)

    The device node information from the DTS is stored in a platform_device's dev.of_node, and we need it to probe the HW:

    if (!pdev->dev.of_node)
            return -ENODEV;

    Next, the driver checks that the device node's compatible string matches one of its declared strings.

    id = of_match_node(xadc_of_match_table, pdev->dev.of_node);

    But don't you think this is unnecessary?  The match table was already declared statically when this platform driver was declared!

    static struct platform_driver xadc_driver = {
            .probe = xadc_probe,
            .remove = xadc_remove,
            .driver = {
                    .name = "xadc",
                    .of_match_table = xadc_of_match_table,
            },
    };
    module_platform_driver(xadc_driver);

    If the kernel is probing this driver, it must be because the of_match_table matched the compatible string in the DTS node!

    Maxime Ripard explained iio_allocate_device() in his introduction of the IIO framework, but the devm_iio_device_alloc() wrapper will also manage the private memory (convenient!) by automatically freeing it when the device detaches (Q: when is that?).

    indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*xadc));

    The device's private data is actually at the end of the iio_device structure, so getting the pointer to the device's private data relies on pointer arithmetic:

    #define IIO_ALIGN L1_CACHE_BYTES
    static inline void *iio_priv(const struct iio_dev *indio_dev)
    {
            return (char *)indio_dev + ALIGN(sizeof(struct iio_dev), IIO_ALIGN);
    }

    Note the L1 cache consciousness brought to bear on the data structure layout!
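    The trailing-private-data trick can be reproduced in plain user-space C (fake_iio_dev, fake_priv, and the IIO_ALIGN value are toy stand-ins for illustration only):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* kernel-style ALIGN: round x up to a multiple of a (a is a power of 2) */
    #define ALIGN(x, a)  (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))
    #define IIO_ALIGN    64  /* stand-in for L1_CACHE_BYTES */

    struct fake_iio_dev { int id; };     /* toy stand-in for struct iio_dev */
    struct fake_priv    { int value; };  /* toy driver-private data */

    /* one allocation holds the device, followed by aligned private data */
    static struct fake_iio_dev *fake_alloc(size_t priv_size)
    {
            return malloc(ALIGN(sizeof(struct fake_iio_dev), IIO_ALIGN) + priv_size);
    }

    static void *fake_priv(struct fake_iio_dev *dev)
    {
            return (char *)dev + ALIGN(sizeof(struct fake_iio_dev), IIO_ALIGN);
    }

    int main(void)
    {
            struct fake_iio_dev *dev = fake_alloc(sizeof(struct fake_priv));
            struct fake_priv *priv = fake_priv(dev);

            /* private data starts on an IIO_ALIGN boundary past the device */
            assert(((char *)priv - (char *)dev) % IIO_ALIGN == 0);
            priv->value = 42;
            assert(priv->value == 42);
            free(dev);
            return 0;
    }
    ```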

    xadc_ops should be pointing to either xadc_zynq_ops or xadc_axi_ops: the only 2 static xadc_ops instances, hooked up to the matching DTS entry through the match table we saw in the previous section.  So the following call magically points the dynamically found device at the correct "virtual method table"--beautiful!

    xadc->ops = id->data;

    What does the XADC use completion for?  For that matter, MT semantics of any driver are poorly documented (and understood even?).  XADC also has mutex, spinlock, and delayed work.

    init_completion(&xadc->completion);
    mutex_init(&xadc->mutex);
    spin_lock_init(&xadc->lock);
    INIT_DELAYED_WORK(&xadc->zynq_unmask_work, xadc_zynq_unmask_worker);

    The standard pattern for IO mapping the memory mapped registers to the Kernel virtual memory is ioremap_resource:

    mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    xadc->base = devm_ioremap_resource(&pdev->dev, mem);

    Obscure: platform device is the parent of the IIO device!

    indio_dev->dev.parent = &pdev->dev;
    indio_dev->dev.of_node = pdev->dev.of_node;

    According to the Xilinx doc ug480, this device does not have a HW buffer, but DOES have HW trigger capability.  So why doesn't the device driver specify INDIO_BUFFER_TRIGGERED?

    indio_dev->modes = INDIO_DIRECT_MODE;

    Let's see down the road what omitting the triggered buffer forces the upper layer to do.

    This driver offers a lot of sysfs interface, through the iio_info method table:

    static const struct iio_info xadc_info = {
            .read_raw = &xadc_read_raw,
            .write_raw = &xadc_write_raw,
            .read_event_config = &xadc_read_event_config,
            .write_event_config = &xadc_write_event_config,
            .read_event_value = &xadc_read_event_value,
            .write_event_value = &xadc_write_event_value,
            .update_scan_mode = &xadc_update_scan_mode,
            .driver_module = THIS_MODULE,
    };

    indio_dev->info = &xadc_info;

    So far, I've only used the read_raw(), in the previous section.

    It's interesting that the HW config is read from DT:

    ret = xadc_parse_dt(indio_dev, pdev->dev.of_node, &conf0);

    The HW features this driver supports are:
    • xlnx,external-mux: none, single, dual
      • xlnx,external-mux-channel
    • xlnx,channels: defaults to the list below
      • reg
      • xlnx,bipolar: means signed

    static const struct iio_chan_spec xadc_channels[] = {
            XADC_CHAN_TEMP(0, 8, XADC_REG_TEMP),
            XADC_CHAN_VOLTAGE(0, 9, XADC_REG_VCCINT, "vccint", true),
    ...
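    For example, a board DTS could override these defaults with something like the following fragment (property names taken from the driver's DT parsing; the values and node layout here are purely illustrative):

    ```dts
    &xadc {
            xlnx,external-mux = "single";
            xlnx,external-mux-channel = <1>;
            xlnx,channels {
                    #address-cells = <1>;
                    #size-cells = <0>;
                    channel@0 {
                            reg = <0>;
                            xlnx,bipolar;   /* treat this channel as signed */
                    };
            };
    };
    ```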

    The AXI device is high performance, so buffering is enabled for "xlnx,axi-xadc-1.00.a", which uses the xadc_axi_ops virtual interface, holding the XADC_FLAGS_BUFFERED option (but STILL does NOT set the INDIO_BUFFER_TRIGGERED mode bit).

    if (xadc->ops->flags & XADC_FLAGS_BUFFERED) {
            ret = iio_triggered_buffer_setup(indio_dev,
                    &iio_pollfunc_store_time, &xadc_trigger_handler,
                    &xadc_buffer_ops);

            xadc->convst_trigger = xadc_alloc_trigger(indio_dev, "convst");

            xadc->samplerate_trigger = xadc_alloc_trigger(indio_dev, "samplerate");
    }

    A triggered buffer IS a buffer, so it has a buffer setup virtual interface.

    static struct iio_buffer_setup_ops xadc_buffer_ops = {
            .preenable = &xadc_preenable,
            .postenable = &iio_triggered_buffer_postenable,
            .predisable = &iio_triggered_buffer_predisable,
            .postdisable = &xadc_postdisable,
    };

    I find it odd that a trigger is specified by only the name.  I find the whole IIO trigger and buffer to be poorly explained.  Maybe I can understand IIO buffer if I understand how the driver interacts with the HW.  But I can tell it's going to take more than a paragraph, so I will defer to a separate section below.

    The device only uses 1 clock (according to the DTS), so there should be no confusion about which clock we want even if NULL is specified as the clock name:

    xadc->clk = devm_clk_get(&pdev->dev, NULL);

    And then we can prepare and enable the clock: clk_prepare_enable(xadc->clk), through the clock implementation (hits HW registers).

    Setup (xadc_zynq_setup for the CPU restricted HW I designed in Vivado) takes the IRQ number specified in the DTS:

    ret = xadc->ops->setup(pdev, indio_dev, irq);

    Besides resetting the XADC HW and clearing interrupts, xadc_zynq_setup() sets the interrupt mask (currently hard coded to ~0, or all events) and configures the HW with the following bits:
    • XADC_ZYNQ_CFG_ENABLE: enable PS access of the XADC
    • XADC_ZYNQ_CFG_REDGE: read starts on rising clock edge
    • XADC_ZYNQ_CFG_WEDGE: write starts on rising clock edge
    • tck_div, the divisor necessary to achieve something close to the requested XADC tick rate
    • XADC_ZYNQ_CFG_IGAP(20): minimum idle gap between successive commands.  Not sure why this is necessary...
    Then I meet a threaded IRQ:

    ret = request_threaded_irq(irq, xadc->ops->interrupt_handler,
    xadc->ops->threaded_interrupt_handler,
    0, dev_name(&pdev->dev), indio_dev);

    Since more seasoned Linux developers have raised questions about the utility of this "new" (circa 2009) API, I don't feel guilty about being confused about its benefit compared to traditional top/bottom half processing.  From what I can gather, the major benefits are:
    • The quick handler (interrupt_handler in the above example) will return from interrupt even faster than the traditional top half
    • The top/bottom half processing discipline is enforced, because the quick handler just acknowledges the interrupt, and the slow handler does EVERYTHING else, and in a blocking threaded context.
      • In the current top/bottom half scheme, sometimes you have to ensure mutual exclusion while transferring the HW data from the top half context to the bottom half context.  Avoiding this scenario reduces the time spent in ISR, and simplifies locking.
      • SoftIRQs are inflexible (you usually have to create your own), run concurrently across multiple CPUs, and locking is a bitch.
      • tasklets of the same kind are serialized across CPUs, but cannot block (because they run in softirq context)
      • I can't find any fault with workqueue; people complained about resource heaviness, but you can even use the default event workqueue provided by the kernel.  Could it be that the workqueue scheduling latency is too large?

    So I am just going to make a mental model of threaded_irq as an easier to use version of the workqueue.  I will come to the handlers shortly, to understand this new threaded IRQ better.

    I can potentially use the thresholds to have the HW generate an alarm when, say, the temperature exceeds a certain level.  probe() reads the current threshold values:

    for (i = 0; i < 16; i++)
            xadc_read_adc_reg(xadc, XADC_REG_THRESHOLD(i),
                            &xadc->threshold[i]);

    Next, bipolar/unipolar settings per channel are written to the HW.  The fact that the bipolar settings registers are 16-bit is inferred from page 58 of the Xilinx doc ug480--but only after scratching my head over the code; you have to stay alert while reading these datasheets!

    bipolar_mask = 0;
    for (i = 0; i < indio_dev->num_channels; i++) {
            if (indio_dev->channels[i].scan_type.sign == 's')
                    bipolar_mask |= BIT(indio_dev->channels[i].scan_index);
    }
    ret = xadc_write_adc_reg(xadc, XADC_REG_INPUT_MODE(0), bipolar_mask);
    ret = xadc_write_adc_reg(xadc, XADC_REG_INPUT_MODE(1),
                    bipolar_mask >> 16);

    The mutex created above (when creating the locking primitives) is used when writing the XADC register.  I need to check if the same mutex is used in the fast read/write path.

    I think it's a good practice to first disable the alarms before changing the alarm thresholds:

    xadc_update_adc_reg(xadc, XADC_REG_CONF1, XADC_CONF1_ALARM_MASK,
    XADC_CONF1_ALARM_MASK);

    To enable the HW, the active channel bits are set in the sequencer register; again 16 bits each:

    ...
    ret = xadc_write_adc_reg(xadc, XADC_REG_SEQ(0), scan_mask & 0xffff);
    ret = xadc_write_adc_reg(xadc, XADC_REG_SEQ(1), scan_mask >> 16);
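    The split of the 32-bit scan mask across the two 16-bit sequencer registers can be checked in user space (the channel bit positions follow the scan_index values seen earlier; the mask value itself is hypothetical):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* hypothetical scan mask: temperature (scan_index 8),
             * VCCINT (scan_index 9), and an aux channel at bit 16 */
            uint32_t scan_mask = (1u << 8) | (1u << 9) | (1u << 16);

            /* the 32-bit mask is split across two 16-bit sequencer registers */
            uint16_t seq0 = scan_mask & 0xffff;  /* XADC_REG_SEQ(0) */
            uint16_t seq1 = scan_mask >> 16;     /* XADC_REG_SEQ(1) */

            assert(seq0 == 0x0300);
            assert(seq1 == 0x0001);
            return 0;
    }
    ```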

    And this driver uses continuous scanning (i.e. no access to event driven scanning):

    ret = xadc_update_adc_reg(xadc, XADC_REG_CONF1, XADC_CONF1_SEQ_MASK,
    XADC_CONF1_SEQ_CONTINUOUS);

    All XADC blocks (A and B) are powered up when we write 0 to the power down register:

    return xadc_update_adc_reg(xadc, XADC_REG_CONF2, XADC_CONF2_PD_MASK, val);

    When the driver is registered to the IIO framework, it can be used.  I am not sure how important remembering the private data of a platform driver (which is the parent of this device, as we saw above) is; given that it is done AFTER registering with the IIO framework, I guess it's not super important.

    ret = iio_device_register(indio_dev);
    platform_set_drvdata(pdev, indio_dev);

    Writing to XADC?!

    Of course you can't write to an ADC.  But you can write a new desired sample rate, as you can see in xadc_write_raw, the implementation of the write_raw() IIO API for this device driver, which ignores all requests except IIO_CHAN_INFO_SAMP_FREQ, corresponding to the /sys/bus/iio/devices/iio\:device0/sampling_frequency sysfs file we saw earlier:

    static int xadc_write_raw(struct iio_dev *indio_dev,
            struct iio_chan_spec const *chan, int val, int val2, long info)
    {
            struct xadc *xadc = iio_priv(indio_dev);
            unsigned long clk_rate = xadc_get_dclk_rate(xadc);
            unsigned int div;

            if (info != IIO_CHAN_INFO_SAMP_FREQ)
                    return -EINVAL;
    ...

    The sysfs file name is mapped to IIO_CHAN_INFO through the iio_chan_info_postfix array, used when forming the sysfs name (<>/drivers/iio/industrialio-core.c):

    static const char * const iio_chan_info_postfix[] = {
            [IIO_CHAN_INFO_RAW] = "raw",
            ...
            [IIO_CHAN_INFO_SAMP_FREQ] = "sampling_frequency",
            ...
    };

    I've never seen this notation for statically assigning array members (C99 designated initializers); clever!
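    A minimal self-contained example of the notation (the enum names here are made up for illustration; the kernel uses the IIO_CHAN_INFO_* enum):

    ```c
    #include <assert.h>
    #include <string.h>

    /* C99 designated initializers: name the index you are assigning;
     * unnamed entries are zero-initialized (here, NULL) */
    enum { INFO_RAW = 0, INFO_SCALE = 2, INFO_SAMP_FREQ = 5, INFO_MAX };

    static const char * const postfix[INFO_MAX] = {
            [INFO_RAW]       = "raw",
            [INFO_SCALE]     = "scale",
            [INFO_SAMP_FREQ] = "sampling_frequency",
    };

    int main(void)
    {
            assert(strcmp(postfix[INFO_SAMP_FREQ], "sampling_frequency") == 0);
            assert(postfix[1] == NULL);  /* gaps are zeroed */
            return 0;
    }
    ```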

    Reading raw XADC

    When I read the in_temp0_raw sysfs file in the XADC introduction, I went through IIO_TEMP and IIO_CHAN_INFO_RAW, set up (along with the scale and offset) in the XADC_CHAN_TEMP macro seen earlier:

    #define XADC_CHAN_TEMP(_chan, _scan_index, _addr) { \
            .type = IIO_TEMP, \
            .indexed = 1, \
            .channel = (_chan), \
            .address = (_addr), \
            .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \
                    BIT(IIO_CHAN_INFO_SCALE) | \
                    BIT(IIO_CHAN_INFO_OFFSET), \
            .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ), \
            .event_spec = xadc_temp_events, \
            .num_event_specs = ARRAY_SIZE(xadc_temp_events), \
            .scan_index = (_scan_index), \
            .scan_type = { \
                    .sign = 'u', \
                    .realbits = 12, \
                    .storagebits = 16, \
                    .shift = 4, \
                    .endianness = IIO_CPU, \
            }, \
    }
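    The scan_type above implies how a raw sample is recovered: the 12 real bits sit at the top of a 16-bit register (shift = 4).  A minimal sketch of the extraction (the register value is hypothetical):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Extract a raw sample per the scan_type above:
     * realbits stored MSB-justified in a 16-bit register, shifted by 'shift' */
    static uint16_t extract_raw(uint16_t reg, unsigned shift, unsigned realbits)
    {
            return (reg >> shift) & ((1u << realbits) - 1);
    }

    int main(void)
    {
            /* hypothetical register reading: raw code 2551 shifted up by 4 */
            uint16_t reg = 2551u << 4;

            assert(extract_raw(reg, 4, 12) == 2551);
            return 0;
    }
    ```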

    Reading the corresponding sysfs file will eventually hit xadc_read_raw(), which handles the different info requests (raw/scale/offset) in a switch statement.  Reading the ADC should just be reading the latest value pending in the HW, which TAKES the mutex we discussed earlier (so the critical path may be blocked!).  But this function enforces that direct reads are disallowed in buffer-enabled mode:

    case IIO_CHAN_INFO_RAW:
            if (iio_buffer_enabled(indio_dev))
                    return -EBUSY;

    If reading is not buffered or DMAed, we shouldn't need to care about the interrupt, so what goes on in the ISR?

    Handling data received and alarm interrupt

    The Zynq (vs. AXI) version of the quick handler (the ISR proper) is xadc_zynq_interrupt_handler.  The 1st thing it does is check whether any event of interest is pending in the interrupt status register:

    xadc_read_reg(xadc, XADC_ZYNQ_REG_INTSTS, &status);

    status &= ~(xadc->zynq_intmask | xadc->zynq_masked_alarm);

    if (!status)
            return IRQ_NONE;

    Note that if no event of interest is pending (after applying the intmask and the alarm mask), the ISR returns IRQ_NONE without acknowledging the interrupt.  Shouldn't the driver acknowledge ALL interrupts, rather than just the ones it is interested in?

    xadc_write_reg(xadc, XADC_ZYNQ_REG_INTSTS, status);

    Anyhow, this, and the rest of the HW ISR, is done inside the spinlock we saw earlier when writing commands to/reading data from the XADC HW over the JTAG DRP, because the ISR wakes up the pending read/write thread if the DFIFO threshold-exceeded interrupt is pending, and updates the pending alarms (before disabling all XADC interrupts if an alarm is pending):

    if (status) {
            xadc->zynq_alarm |= status;
            xadc->zynq_masked_alarm |= status;
            xadc_zynq_update_intmsk(xadc, 0, 0);
            ret = IRQ_WAKE_THREAD;
    }

    xadc_zynq_threaded_interrupt_handler is the SW (threaded) ISR--now made possible with the threaded_irq kernel API.  In it, the driver takes a spinlock that blindly disables IRQs, which should only be used if it is permissible to re-enable IRQs when spin_unlock_irq is called shortly after.

    spin_lock_irq(&xadc->lock);
    alarm = xadc->zynq_alarm;
    xadc->zynq_alarm = 0;
    spin_unlock_irq(&xadc->lock);

    This spinlock protects the driver's zynq_alarm field, which was set in the HW ISR above, INSIDE the same spinlock.  Then, outside the spinlock, the kernel ISR thread can leisurely handle the pending alarm bits in the copied-out alarm:

    xadc_handle_events(indio_dev, xadc_zynq_transform_alarm(alarm));

    schedule_delayed_work(&xadc->zynq_unmask_work,
    msecs_to_jiffies(XADC_ZYNQ_UNMASK_TIMEOUT));

    The necessity of that last line, scheduling delayed work, is explained in the comment above the delayed work's callback function:
    The ZYNQ threshold interrupts are level sensitive. Since we can't make the threshold condition go away from within the interrupt handler, this means as soon as a threshold condition is present we would enter the interrupt handler again and again. To work around this we mask all active thresholds interrupts in the interrupt handler and start a timer. In this timer we poll the interrupt status and only if the interrupt is inactive we unmask it again.
    This pattern seems like one to reuse many times.
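    A toy user-space simulation of the mask-and-poll idea (no kernel APIs; the scripted level sequence is made up): with a level-sensitive source, masking in the handler and unmasking only after a poll sees the level gone yields one handler invocation per episode instead of a storm.

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* scripted level-sensitive "threshold" line: high for a while, then low */
    static const bool level[] = { true, true, true, true, false, false };

    int main(void)
    {
            bool masked = false;
            int handled = 0;

            for (unsigned t = 0; t < sizeof(level) / sizeof(level[0]); t++) {
                    /* "interrupt handler": fires as long as the line is high
                     * and the source is unmasked */
                    if (level[t] && !masked) {
                            handled++;
                            masked = true;  /* mask instead of trying to clear */
                    }
                    /* "delayed work": poll the status; unmask only when quiet */
                    if (masked && !level[t])
                            masked = false;
            }

            /* one invocation for the whole high episode, not four */
            assert(handled == 1);
            return 0;
    }
    ```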

    Triggered buffer

    As promised above, I came back to the trigger and buffer that the Xilinx-supplied DTS for the Zedboard does NOT use.  probe() set up a triggered buffer, using the IIO framework supplied iio_pollfunc_store_time as the top half handler and the driver supplied xadc_trigger_handler as the bottom (threaded) half.  There seem to be 2 kinds of XADC triggers: conversion start, and sample rate.  But what is the precise usage?

    Since the whole mechanism is a bit of a mystery, I will just start with xadc_trigger_handler(), registered in the triggered buffer setup; it returns irqreturn_t, but is it a HW ISR?

    WIP.
