Easy Hoogle usage from bash

What is Hoogle?

Hoogle is a Google-like search engine for Haskell functions: you can query it by function name or by type signature. There is also a command-line tool, hoogle, which can be installed using stack:
$ stack install hoogle
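
Depending on the Hoogle version, the command needs a local search database before the first query; recent versions build it with hoogle generate (older releases used hoogle data instead), so you may first need something like:
$ hoogle generate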

Using hoogle from the command line {#using-hoogle-from-command-line}

To hoogle a function, you can just pass its name as a parameter:
$ hoogle fmap
Prelude fmap :: Functor f => (a -> b) -> f a -> f b
Data.Functor fmap :: Functor f => (a -> b) -> f a -> f b
Control.Monad fmap :: Functor f => (a -> b) -> f a -> f b
Control.Monad.Instances fmap :: Functor f => (a -> b) -> f a -> f b
Data.Traversable fmapDefault :: Traversable t => (a -> b) -> t a -> t b
Network.Stream fmapE :: (a -> Result b) -> IO (Result a) -> IO (Result b)

Or you can pass a type signature:

$ hoogle "(a -> b -> c) -> [a] -> [b] -> [c]"
Prelude zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
Data.List zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
Control.Applicative liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
Control.Monad liftM2 :: Monad m => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r
Prelude scanl :: (a -> b -> a) -> a -> [b] -> [a]
Data.List scanl :: (a -> b -> a) -> a -> [b] -> [a]
Prelude scanr :: (a -> b -> b) -> b -> [a] -> [b]
Data.List scanr :: (a -> b -> b) -> b -> [a] -> [b]
Data.List deleteFirstsBy :: (a -> a -> Bool) -> [a] -> [a] -> [a]
Data.List intersectBy :: (a -> a -> Bool) -> [a] -> [a] -> [a]
...

But this shows only a list of the functions' signatures. Sometimes we want to see more information about a function. If you use the -i option, additional information will be shown:
$ hoogle -i "(a -> b -> c) -> [a] -> [b] -> [c]"
Prelude zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]

zipWith generalises zip by zipping with the function given as the first argument, instead of a tupling function. For example, zipWith (+) is applied to two lists to produce the list of corresponding sums.

From package base
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]

And again this is not enough, because it shows the documentation of only the first function from the list. We have to pass a start index to see the documentation of further functions, e.g. to show information about the third item from the list:

$ hoogle -i -s 3 "(a -> b -> c) -> [a] -> [b] -> [c]"
Control.Applicative liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c

Lift a binary function to actions.

From package base
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
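
Hoogle only locates the functions; to double-check how they actually behave, a quick GHCi session (assuming a standard base install) can help:

$ ghci
Prelude> zipWith (+) [1, 2, 3] [10, 20, 30]
[11,22,33]
Prelude> import Control.Applicative
Prelude Control.Applicative> liftA2 (+) [1, 2] [10, 20]
[11,21,12,22]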

Easier hoogle usage in bash {#easier-hoogle-usage-in-bash}

It is cumbersome to count entries each time you want to read info about one of the further functions, so I have prepared a bash function which makes it easier. To search for a function, type:

$ hoog "(a->b) -> f a -> f b"
1) Data.Traversable fmapDefault :: Traversable t => (a -> b) -> t a -> t b
2) Prelude fmap :: Functor f => (a -> b) -> f a -> f b
3) Data.Functor fmap :: Functor f => (a -> b) -> f a -> f b
4) Control.Monad fmap :: Functor f => (a -> b) -> f a -> f b
5) Control.Monad.Instances fmap :: Functor f => (a -> b) -> f a -> f b
6) Data.Functor (<$>) :: Functor f => (a -> b) -> f a -> f b
7) Control.Applicative (<$>) :: Functor f => (a -> b) -> f a -> f b

Each function has its number at the beginning; just add that number at the end of the command to show more information about the specific function:

$ hoog "(a->b) -> f a -> f b" 6
Searching for: (a -> b) -> f a -> f b
Data.Functor (<$>) :: Functor f => (a -> b) -> f a -> f b

An infix synonym for fmap.

From package base
(<$>) :: Functor f => (a -> b) -> f a -> f b

How to install the hoog command? {#how-to-install-hoog-command}

The command is available here. To use it, just add it to your ~/.bashrc file.
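
For reference, here is a minimal sketch of what such a function can look like (the actual script linked above may differ): it numbers the results when called with one argument, and delegates to hoogle -i -s when called with two:

# Minimal sketch of a hoog helper; the real script may differ.
hoog() {
    if [ -z "$2" ]; then
        # No index given: print each match prefixed with its number
        hoogle "$1" | awk '{print NR ") " $0}'
    else
        # Index given: show detailed info for the n-th search result
        hoogle -i -s "$2" "$1"
    fi
}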

Recently at storm-users

I've been reading through the storm-users Google Group recently. This resolution was heavily inspired by Adam Kawa's post "Football zero, Apache Pig hero". Since I've encountered a lot of insightful and very interesting information, I've decided to describe some of it in this post.

  • nimbus will work in HA mode - There's a pull request open for it already... but some recent work (distributing topology files via Bittorrent) will greatly simplify the implementation. Once the Bittorrent work is done we'll look at reworking the HA pull request. (storm’s pull request)

  • pig on storm - Pig on Trident would be a cool and welcome project. Join and groupBy have very clear semantics there, as those concepts exist directly in Trident. The extensions needed to Pig are the concept of incremental, persistent state across batches (mirroring those concepts in Trident). You can read a complete proposal.

  • implementing topologies in pure python with petrel looks like this:

import storm  # the Python multilang adapter (storm.py) shipped for Storm/petrel

class Bolt(storm.BasicBolt):
    def initialize(self, conf, context):
        '''This method is executed only once, when the bolt starts.'''
        storm.log('initializing bolt')

    def process(self, tup):
        '''This method is executed every time a new tuple arrives.'''
        msg = tup.values[0]
        storm.log('Got tuple %s' % msg)

if __name__ == "__main__":
    Bolt().run()
  • Fliptop is happy with storm - see their presentation here

  • topology metrics in 0.9.0: The new metrics feature allows you to collect arbitrary custom metrics over fixed windows. Those metrics are exported to a metrics stream that you can consume by implementing IMetricsConsumer and configuring it with Config.java#L473. Use TopologyContext#registerMetric to register new metrics.

  • storm vs flume - some users' point of view: I use Storm and Flume and find that they are better at different things - it really depends on your use case as to which one is better suited. First and foremost, they were originally designed to do different things: Flume is a reliable service for collecting, aggregating, and moving large amounts of data from source to destination (e.g. log data from many web servers to HDFS). Storm is more for real-time computation (e.g. streaming analytics) where you analyse data in flight and don't necessarily land it anywhere. Having said that, Storm is also fault-tolerant and can write to external data stores (e.g. HBase), and you can do real-time computation in Flume (using interceptors).

That's all for this day - however, I'll keep on reading through storm-users, so watch this space for more info on storm development.
