• Cloud to Cable Patent Updates

    Cloud to Cable Patent Officially Issued (2nd Patent)

    The new patent, also covering “Cloud to Cable TV,” was issued on December 11th, 2019.

    What Does the Cloud to Cable Patent Cover?

    Cloud to Cable is a patented solution for music streaming providers to distribute content to MVPDs. Amplify your offering from online streaming to Cable TV and IPTV systems with linear channels and SVOD subscriptions. Create visually appealing streams with great sound, bundled with a mobile experience through the MEVIA app.

    Patents: US 10,123,074, and 10,524,002 with European Patent filed/PCT.

    Music and video are ready across all broadcasting platforms for easy monetization from your affiliates in MVPD, IPTV, Smart TV, and mobile systems.

    Cloud to Cable runs on high-performance servers ready for your customer’s CABSAT headend, with a fault-tolerant design for quick integration. The content is available in mobile applications and Cable TV broadcasts as SVOD or linear channels, all at once.

    Cloud to Cable TV patent Issued

    10,524,002 Patent Now Available

    Cloud to Cable Patent Portfolio

    As of December 11th, 2019, the USPTO officially issued US Patent 10,524,002, covering aspects of Cloud to Cable TV that were not covered by the initial patent. I received notification today of my 12th issued US patent, with hopefully more to come in the coming years.

    This patent includes several claims, among them: generation of a parallelized set of MPEG TV / DVB streams broadcast to Cable TV or IPTV systems; MPEG TV bi-directional communication from the set-top box to the Cable TV system; virtualized versions of the broadcasting embodiment in the Cloud; and other important inventions.

    Edge Computing for TV Broadcasting

    Both Cloud to Cable patents, 10,123,074 and 10,524,002, cover a device or computing system that can be embodied in an edge server located at the Cable TV premises, an IPTV system, or even at newly defined 4G LTE and 5G broadcasting platforms.

    Cloud to Cable TV brings virtualization to media broadcasting and distribution.

    For licensing proposals or partnerships, don’t hesitate to reach out.

    Cloud to Cable TV Patent

    The patent family now includes 10,123,074 and 10,524,002. As shown herein, the claims include, for example:

    Two-way control messages (Claim 24),

    Injection of MPEG Metadata or MPEG Frames into the stream.

    Fault-tolerance system and multicasting server for MPEG encoded video and audio,

    HTTP Live Streaming, RTSP, or HTTP playlists,

    Linear and Video on Demand (VOD) Support.
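Since the claims cover HTTP Live Streaming and HTTP playlists, here is a minimal sketch of how a media playlist for a linear channel could be generated. The function name, segment filenames, and durations are hypothetical; a production system would use a real segmenter/muxer.

```python
# Sketch: build an HLS (HTTP Live Streaming) media playlist string.
# Segment names and durations below are hypothetical examples.

def build_hls_playlist(segments, target_duration=10):
    """Return an M3U8 media playlist for a list of (uri, duration) pairs."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")  # per-segment duration tag
        lines.append(uri)
    return "\n".join(lines) + "\n"

playlist = build_hls_playlist([("seg0.ts", 9.96), ("seg1.ts", 10.0)])
print(playlist)
```

The same structure serves both linear channels (a sliding window of segments) and VOD (a complete, closed playlist with an `#EXT-X-ENDLIST` tag appended).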

    Software Platform and Reference Implementation

    The reference implementation and production device is delivered as our “MediaPlug” or “Mevia” appliance. In general, any server with 8-16 GB of RAM, an Intel i7 or AMD processor, a 2 TB drive (RAID), and Ethernet or fiber interfaces is more than sufficient to load all Docker images and be provisioned for media delivery.

    Additional Software Requirements

    XenServer 7.2 or higher, or Ubuntu Linux 14.04 or higher with Docker images.
    Sources are implemented in PHP, Python, C/C++, BASH, and other modules.

    Mux and Cable Headend Requirements

    The cable headend should consist of a Motorola-based Cherry or any other DVB/MPEG mux. All set-top boxes can support multicast streams directly for IPTV systems over fiber, or coaxial with DOCSIS 2.0-3.0. MPEG messaging and encoding depend on the provider.

    Formats include audio-only, HTML-based Standard Definition (SD), High Definition (HD), 4K, and/or Dolby Digital sound.
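Since the headend expects multicast MPEG streams, the delivery path can be sketched as packetizing content into 188-byte MPEG-TS packets and sending them to a multicast group. This is only an illustration: the multicast address, port, and payload are hypothetical, and a real TS packet carries a 4-byte header (PID, continuity counter, etc.) produced by a proper muxer.

```python
import socket

TS_PACKET_SIZE = 188   # MPEG transport stream packets are 188 bytes
TS_SYNC_BYTE = 0x47    # every TS packet begins with the sync byte 0x47

def packetize(payload: bytes) -> list:
    """Split payload into 188-byte TS-sized packets, padding the last one.

    Sketch only: here the remaining 187 bytes are raw payload, whereas a
    real muxer would emit a proper TS header after the sync byte.
    """
    packets = []
    body_size = TS_PACKET_SIZE - 1
    for i in range(0, len(payload), body_size):
        chunk = payload[i:i + body_size]
        packets.append(bytes([TS_SYNC_BYTE]) + chunk.ljust(body_size, b"\xff"))
    return packets

def send_multicast(packets, group="239.1.1.1", port=1234, ttl=1):
    """Send packets to a (hypothetical) multicast group used by the headend."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    for pkt in packets:
        sock.sendto(pkt, (group, port))
    sock.close()
```

In an IPTV deployment the set-top boxes would join the multicast group directly; over DOCSIS, the mux at the headend takes the stream and modulates it onto the cable plant.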

  • Cloud to Cable – Second Patent Allowed

    Cloud to Cable Second Patent Allowed

    Besides US Patent 10,123,074, a new patent has been allowed within the same family. A second set of claims was allowed on September 3rd, 2019, covering MPEG TV and music broadcasting, MPEG 2-way communications, HTTP Live Streaming broadcasting, and fault tolerance for carriers.

    The patent covers a system to deliver multiple video and audio broadcasts that combine web pages with multimedia to be delivered to cable operators.

    The allowed claims cover the following inventions:

    ✪ MPEG Broadcasting – DVB (Digital Video Broadcasting)

    ✪ MPEG 2-way broadcasting (On Demand)

    ✪ HTTP Live Streaming (Applications, OTT TV, Over-the-Top)

    ✪ Fault-Tolerance and broadcasting

    The claims allowed are essential for modern broadcasting systems for video, music, and web pages.

    The Cloud to Cable TV patents are a bridge between cloud systems and TV & audio broadcasting platforms, where the convergence of HTML and virtualization makes possible what is today called Edge Computing.

    In 4G & 5G systems, Edge Computing is defined as:

    Edge computing provides compute and storage resources with adequate connectivity (networking) close to the devices generating traffic. The benefit is the ability to provide new services with high requirements on e.g. latency or on local break-out possibilities to save bandwidth in the network – data should not have to travel far in the network to reach the server. Regulatory compliance and network scalability are also important edge computing drivers. (Source: Ericsson)

    In a way, Cloud to Cable brings compute and storage resources for TV broadcasting systems, either DVB, Content Delivery Networks, or other similar systems.

    You can review a summary of what’s been published by the USPTO.

    For Licensing Information:

    Licensing Technologies Presentation


    USPTO Public PAIR Information:

    16152606-2
  • Hadoop: Tutorial and BigData

    What’s Hadoop?

    Hadoop is a framework and set of tools that enables partitioning and splitting tasks across multiple servers and nodes on a network. Hadoop then provides the required framework to MAP and REDUCE a process into multiple chunks or segments.

    Hadoop has multiple projects that include:

    Hive, HBase, Chukwa, Pig, Spark, Tez, and some others. For instance, Hive is a data warehouse that provides data summarization and ad-hoc querying, and HBase is a database that supports structured data storage for large tables.

    However, the common modules are: Common, HDFS, YARN (job scheduling and cluster management), and MapReduce.

    source: http://hadoop.apache.org  

    High-Level Architecture of Hadoop

    As shown in the figure from opensource.com, Hadoop includes a Master Node and Slave Node(s). The Master Node contains a JobTracker that interfaces with the TaskTrackers running on all the Slave Nodes.

    The MapReduce layer is the set of applications used to split the task at hand across several Slave Nodes. Each Slave Node processes a piece of the problem and, once complete, the results are handed over from the “Mapping” phase to the “Reducing” phase.


    High Level Architecture of Hadoop

    MapReduce workflow

    As shown in the figure, the MapReduce workflow proceeds as follows:

    • On the left side, BigData is a set of files or one huge file — a huge log file or a database.
    • HDFS refers to the “Hadoop Distributed File System,” which is used to copy the data, split it across the cluster, and later merge the results back together.
    • The generated output is then copied over to a destination node.

    MapReduce Workflow
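The workflow above can be simulated in plain Python, without a cluster, to make the map → shuffle → reduce phases concrete (the sample input lines are made up for illustration):

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit (word, 1) for every word in the line.
    for word in line.split():
        yield word, 1

def shuffle(pairs):
    # Shuffle phase: group all values by key, as Hadoop would
    # before handing them to the reducers.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(word, occurrences):
    # Reduce phase: sum the occurrences of each word.
    return word, sum(occurrences)

lines = ["big data is big", "data is everywhere"]
mapped = [pair for line in lines for pair in mapper(line)]
counts = dict(reducer(w, occ) for w, occ in shuffle(mapped).items())
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

In real Hadoop, each phase runs on different nodes and the shuffle moves data across the network; the logic, however, is exactly this.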

    Example of MapReduce

    For example, let’s say we need to count the number of words in a file, assigning a line to each server in the Hadoop cluster. We can run the following code. MRWordCounter() does the job of splitting each line into words and mapping all the jobs.

    from mrjob.job import MRJob
    
    class MRWordCounter(MRJob):
        def mapper(self, key, line):
            # Emit (word, 1) for every word in the input line
            for word in line.split():
                yield word, 1
    
        def reducer(self, word, occurrences):
            # Sum all the counts emitted for the same word
            yield word, sum(occurrences)
    
    if __name__ == '__main__':
        MRWordCounter.run()

    Using: mrjob

    A music example can be found here:

    from mrjob.job import MRJob
    import track  # helper module from the Million Song Dataset example
    
    class MRDensity(MRJob):
        """ A map-reduce job that calculates the density """
    
        def mapper(self, _, line):
            """ The mapper loads a track and yields its density """
            t = track.load_track(line)
            if t:
                if t['tempo'] > 0:
                    density = len(t['segments']) / t['duration']
                    yield (t['artist_name'], t['title'], t['song_id']), density

    As shown here, the mapper reads a line of the file and uses the “track.load_track()” function to obtain the “tempo”, the number of “segments”, and the additional metadata needed to compute a density value.

    In this particular case there is no need for a Reduce step; the work is simply split across all the Hadoop nodes.
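Because `track.load_track()` belongs to that external example, here is a self-contained sketch of the same density computation; the track records below are hypothetical stand-ins for what the loader would return:

```python
# Hypothetical track records standing in for track.load_track(line).
tracks = [
    {"artist_name": "Artist A", "title": "Song 1", "song_id": "S1",
     "tempo": 120.0, "segments": [0] * 600, "duration": 200.0},
    {"artist_name": "Artist B", "title": "Song 2", "song_id": "S2",
     "tempo": 0.0, "segments": [0] * 300, "duration": 150.0},
]

def densities(tracks):
    """Yield ((artist, title, song_id), density) for tracks with a tempo."""
    for t in tracks:
        if t["tempo"] > 0:  # skip tracks without tempo, as the mapper does
            density = len(t["segments"]) / t["duration"]
            yield (t["artist_name"], t["title"], t["song_id"]), density

for key, d in densities(tracks):
    print(key, d)  # Artist B is skipped because its tempo is 0
```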

    Server Components

    As shown in the figure below from Cloudera, Hadoop uses HDFS as the lower-layer filesystem; MapReduce sits above HDFS and HBase (HBase can be used by MapReduce); and on top of MapReduce we have Pig, Hive, Sqoop, and many other systems, including an RDBMS connected through Sqoop, or BI reporting on Hive, or any other tool.

    hadoop ecosystem

     

    Download Hadoop

    If you want to download Hadoop, do so at https://hadoop.apache.org/releases.html

    References

    [1] Hadoop Tutorial 1-3
    [2] http://musicmachinery.com/2011/09/04/how-to-process-a-million-songs-in-20-minutes/
    [3] http://blog.cloudera.com/blog/2013/01/a-guide-to-python-frameworks-for-hadoop/
    [4] https://hadoop.apache.org/docs/r1.2.1/cluster_setup.html#MapReduce