op {
  graph_op_name: "SparseApplyFtrl"
  in_arg {
    name: "var"
    description: <<END
Should be from a Variable().
END
  }
  in_arg {
    name: "accum"
    description: <<END
Should be from a Variable().
END
  }
  in_arg {
    name: "linear"
    description: <<END
Should be from a Variable().
END
  }
  in_arg {
    name: "grad"
    description: <<END
The gradient.
END
  }
  in_arg {
    name: "indices"
    description: <<END
A vector of indices into the first dimension of var and accum.
END
  }
  in_arg {
    name: "lr"
    description: <<END
Scaling factor. Must be a scalar.
END
  }
  in_arg {
    name: "l1"
    description: <<END
L1 regularization. Must be a scalar.
END
  }
  in_arg {
    name: "l2"
    description: <<END
L2 regularization. Must be a scalar.
END
  }
  in_arg {
    name: "lr_power"
    description: <<END
Learning rate power; controls how the per-coordinate learning rate decays. Must be a scalar.
END
  }
  out_arg {
    name: "out"
    description: <<END
Same as "var".
END
  }
  attr {
    name: "use_locking"
    description: <<END
If `True`, updating of the var and accum tensors will be protected
by a lock; otherwise the behavior is undefined, but may exhibit less
contention.
END
  }
  summary: "Update relevant entries in \'*var\' according to the Ftrl-proximal scheme."
  description: <<END
That is, for rows for which we have grad, we update var, accum, and linear as follows:
$$accum_{new} = accum + grad * grad$$
$$linear += grad - (accum_{new}^{-lr_{power}} - accum^{-lr_{power}}) / lr * var$$
$$quadratic = 1.0 / (accum_{new}^{lr_{power}} * lr) + 2 * l2$$
$$var = (sign(linear) * l1 - linear) / quadratic\ if\ |linear| > l1\ else\ 0.0$$
$$accum = accum_{new}$$
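
As a reading aid, here is a minimal NumPy sketch of the per-row update above
(`sparse_apply_ftrl` is a hypothetical helper for illustration only, not the
TensorFlow kernel, which is implemented in C++ and handles locking and shape
validation):

```python
import numpy as np

def sparse_apply_ftrl(var, accum, linear, grad, indices,
                      lr, l1, l2, lr_power):
  # Update var, accum, and linear in place, one gradient row at a time.
  for g, i in zip(grad, indices):
    accum_new = accum[i] + g * g
    linear[i] += g - (accum_new ** -lr_power
                      - accum[i] ** -lr_power) / lr * var[i]
    quadratic = 1.0 / (accum_new ** lr_power * lr) + 2 * l2
    # Proximal step: coordinates with |linear| <= l1 are clipped to zero.
    var[i] = np.where(np.abs(linear[i]) > l1,
                      (np.sign(linear[i]) * l1 - linear[i]) / quadratic,
                      0.0)
    accum[i] = accum_new
  return var
```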
END
}
