• Cloud to Cable TV – Intellectual Property Portfolio

Cloud to Cable TV is the platform that makes it easy to send music channels, video channels, video on demand, and any other multimedia streaming content to millions of subscribers.  Cloud to Cable gives media owners and content providers a friendly distribution path to subscribers on IPTV, Cable TV, OTT, and even satellite operators.

    CLOUD TO CABLE
    Mobile and Cable TV distribution

We have developed a technology that greatly simplifies the delivery of music & TV content to cable operators and mobile devices, all in a one-stop shop. Upload your content to our online drive or cloud storage, and get ready for distribution and monetization.

    Our intellectual property portfolio includes the following assets:

    • US patents
    • European patents
    • Software and trade secrets

Our patented, innovative technology is called: CLOUD to CABLE

In essence, Cloud to Cable brings value by giving you an easy way to achieve:

• Music broadcasting to operators, mobile, IPTV, cable TV, or even satellite,
• Modern virtualization and cloud technologies integrated into our software,
• Broadcasting with fault tolerance, ready for high reliability and a high quality of service,
• Parallel transcoding, meaning you can deliver 5, 10, 50, even 100 music or TV channels depending on the hardware chosen, the number of instances, and the bandwidth purchased from us (see the sketch after this list),
• A web-based approach: everything is web-based, with no DIGICIPHER II, raw MPEG-TS handling, or similar legacy tooling on your side; anything that works on the web works with Cable TV and satellite,
• Interactive and VOD integration,
• TV & music all in one platform.

    PDF file with our presentation slide deck:


  • Cloud to Cable TV Patent Issued Today.

    US Patent 10,123,074

Today, US Patent 10,123,074 issued for Cloud to Cable, after a provisional was filed in December 2014. This patent covers the currently commercialized Cloud to Cable TV system.  The process was not that long, considering that the initial firm, NOVAK DRUCE, was disbanded and I had to transfer the case to Greg Nelson at Fox & Rothschild, LLP in West Palm Beach, FL.

    The patent issued  is titled: Method, system, and apparatus for multimedia content delivery to cable TV and satellite operators 

A patent continuation was filed in October to cover other aspects of the invention. As shown in the picture, the patented system generates video with an MPEG Transport Stream that is compatible with a Cable TV system.

    What is Cloud to Cable?

    The main aspects and novelty of the invention include but are not limited to the following items:

• Integration of web elements, web pages, or HTML5-related content, with rendering of this content on OTT, IPTV, and most importantly, Cable TV systems.
• Virtualization and use of containers in the distribution of music and TV content to cable operators, by provisioning a virtual machine or Docker container with the web page plus the content to be rendered.
• Essentially, the patent covers music and TV distribution to cable operators and systems for broadcasting.
• Over-the-Top, IPTV, or Cable TV operators can now integrate ReactJS, JavaScript, and other advanced user-interface mechanisms into their video feeds.
• Audio and music channels can be integrated and created by simply connecting to the web-based content provider.
• Fault tolerance and high reliability are presented in the disclosure.

In many ways, the invention covers a device, system, and methods to accomplish media distribution with ease, reducing costs and implementing a novel mechanism that replaces satellite delivery, hardware encoders, and many other devices.


    Cloud to Cable Video

    A video can be found at EGLA’s Youtube Channel:

    https://www.youtube.com/watch?v=80G-rh6Dlns

  • NBC Universal Hackathon – Miami 2018

This weekend I went and spent some time hacking code at the “NBC Universal Hackathon,” trying out new ideas, meeting new friends, and learning a ton about many technological topics while innovating.  The particular problem we decided to solve was the growing irrelevance of current TV and how much more interactive it could be with current technologies.  Our way to solve it was through a collaborative experience where users interact, with their phones and cameras, with the video shown on screen.

The team was composed of: Satya, Paul Valdez, Juan Gus, myself, and Chris.

What we did was simple: we created a website with a canvas that could be treated with effects, added the TV/video feeds into it, and then distributed the content using a platform like “Cloud to Cable TV” to cable operators or OTT/IPTV systems.

    Cloud to Cable TV

The solution required a few items to be set up and configured:

• An RTMP server or WebRTC setup to receive video feeds from smartphones or your laptop,
• FFMPEG to encode, compress, and publish video/audio feeds,
• A mobile app with an RTMP or WebRTC client (or a laptop); we tried several and this one worked out OK,
• A web application in Python to map each feed and position it on top of the TV channel video source (assuming an M3U8 feed or a movie in MP4); a minimal sketch follows this list.
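
For the Python piece, a minimal sketch (assuming Flask, with placeholder URLs rather than the feeds used at the hackathon) that keeps a small registry of live feeds and renders them positioned on top of the main channel source could look like this:

from flask import Flask, render_template_string

app = Flask(__name__)

# Hypothetical feed registry: overlay id -> (HLS URL, CSS position)
FEEDS = {
    "second": ("http://example.com/live/pablo.m3u8", "top:10px; left:10px;"),
    "third":  ("http://example.com/live/gus.m3u8",   "top:10px; right:10px;"),
}
MAIN_SRC = "http://example.com/queen0.mp4"  # the TV channel / movie source

PAGE = """
<video id="v" src="{{ main }}" autoplay muted></video>
{% for fid, cfg in feeds.items() %}
  <video id="v_{{ fid }}" src="{{ cfg[0] }}" style="position:absolute; width:240px; {{ cfg[1] }}" autoplay muted></video>
{% endfor %}
"""

@app.route("/")
def overlay_page():
    # Render the main channel with each registered feed positioned on top of it.
    return render_template_string(PAGE, main=MAIN_SRC, feeds=FEEDS)

if __name__ == "__main__":
    app.run(port=8000)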

With this in place, it is a matter of compiling CRTMP and FFMPEG; we also tried other components, such as deep learning with the “Deep Fakes” project. The idea was to replace one of the actors’ images, as well as to superimpose our live feeds onto the video.

    Issues:

• The Safari browser doesn’t allow you to play content with autoplay features, meaning that the user MUST initiate playback. If Safari detects that the content autoplays onLoad, this fails.
• There are also issues with Safari and dynamically loading content: a video.oncanplaythrough() handler has to be added in the JavaScript.

The live feeds had a delay of about 30-40 seconds, much of it coming from HLS segment buffering (the hls_time of 3 s times the hls_list_size of 4 in the commands below adds roughly 12 s) on top of capture, encoding, and upload time, as each feed had to:

    • Convert and push from mobile phone to RTMP Server,
    • Grab RTMP Stream and send it as an m3u8 encoded file to the website.

The standard CRTMP services screen looked as follows, and the connections from Gus and Pablo took place successfully:

    
    +-----------------------------------------------------------------------------+
    |                                                                     Services|
    +---+---------------+-----+-------------------------+-------------------------+
    | c |      ip       | port|   protocol stack name   |     application name    |
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 1112|           inboundJsonCli|                    admin|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 1935|              inboundRtmp|              appselector|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 8081|             inboundRtmps|              appselector|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 8080|             inboundRtmpt|              appselector|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 6666|           inboundLiveFlv|              flvplayback|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 9999|             inboundTcpTs|              flvplayback|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 5544|              inboundRtsp|              flvplayback|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 6665|           inboundLiveFlv|             proxypublish|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 8989|         httpEchoProtocol|            samplefactory|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 8988|             echoProtocol|            samplefactory|
    +---+---------------+-----+-------------------------+-------------------------+
    |tcp|        0.0.0.0| 1111|    inboundHttpXmlVariant|                  vptests|
    +---+---------------+-----+-------------------------+-------------------------+
    

We also tried WebRTC, but we ran into many issues with latency and delays.

The FFMPEG commands used for the demo were:

    ffmpeg -re  -i rtmp://96.71.39.58/live/pablo -c:v libx264 -c:a aac -ac 1 -strict -2 -crf 18 -profile:v baseline -maxrate 200k -bufsize 1835k -pix_fmt yuv420p -flags -global_header -hls_time 3 -hls_list_size 4 -hls_wrap 10 -start_number 1 /var/www/html/live/pablo.m3u8
    
    ffmpeg -re  -i rtmp://96.71.39.58/live/gus -c:v libx264 -c:a aac -ac 1 -strict -2 -crf 18 -profile:v baseline -maxrate 200k -bufsize 1835k -pix_fmt yuv420p -flags -global_header -hls_time 3 -hls_list_size 4 -hls_wrap 10 -start_number 1 /var/www/html/live/gus.m3u8

The mobile app published an RTMP stream to the server under /live/pablo and /live/gus.  A demo video of what it looks like:

    https://vimeo.com/299048743
    Screen capture in Vimeo using Safari

For screen capturing on a Mac with FFMPEG when you have 3 screens, first list your devices, then capture with explicit codec settings to avoid MOOV-atom issues and unusable MOV/MP4 files:

    ffmpeg -f avfoundation -list_devices true -i "" 
    
    ffmpeg -f avfoundation -i "3:0" -t 120 -pix_fmt yuv420p -c:v libx264 -c:a libmp3lame -r 25 teleport.mov

    The presentation we made to the judges at the “NBC Universal Hackathon” can be found here:

    https://docs.google.com/presentation/d/1sKAvnC-Y-KHu2qclulH2Fp-8yWvTslq6bLaocyEgtfQ/edit?usp=sharing

The source code consists of an HTML site using DOM objects, a video source, and a canvas. The video is hidden in its native format, and canvas drawing is used to copy the video from its “src” (m3u8, MOV, MP4, or whatever format your browser can handle) onto the canvas. The canvas then becomes the placeholder for all the overlays and divs. The idea is that messages can be typed and exchanged between users on top of the canvas, much like WhatsApp or any other chat application.

    var canvas = document.getElementById("c");
    var context = canvas.getContext("2d");
    
    window.onload = function() {
     // document.getElementById("fb-profile").style.display = "none";
      
        var canvas = document.getElementById("c");
        var context = canvas.getContext("2d");
        // grab the video element
        // var video = document.getElementById("v");
        
        // drawVideo(context, video, canvas.width, canvas.height);
        // calls drawVideo() function, passing all the objects
    
    }
    
    var splayer = {};
    
    function showIt(id, url, hideOrNot) {
      console.log(id+"  "+url+ " setting it to " +hideOrNot); 
    
      splayer["v_"+id] = document.getElementById("v_"+id);
      document.getElementById(id).style.display = hideOrNot;
      if (document.getElementById(id).style.display == "none" ) { 
         document.getElementById(id).style.display = "block";
         var vId = "vsrc_"+id; 
         console.log("playing "+vId + "  "+url);
         document.getElementById(vId).src = url;
         if (splayer["v_"+id].paused) { 
            console.log("Video paused.... ");
            splayer["v_"+id].load();
            splayer["v_"+id].oncanplaythrough = function() {
                splayer["v_"+id].play();
             };
         } else {
           console.log("Video is playing already..."); 
         }
      } else {
         console.log(" Stopping .... v_"+id);
         splayer["v_"+id].pause();
         document.getElementById(id).style.display="none";
      }
    }
    
     var player = document.getElementById("v");
     
    function ChangeHarry(){
        console.log("Playing Harry Potter.... ");
        document.getElementById("vsrc_main").src = "http://s3.us-east-2.amazonaws.com/teleportme/videos/HarryPotterClip.mp4";
        player.load();
        player.play();
        drawVideo(context, player, canvas.width, canvas.height);
    }
    
    function ChangeQueen(){
      console.log("Playing Queen of the South ... ");
      player.pause();
      document.getElementById("vsrc_main").src="http://96.71.39.58/queen0.mp4";
      player.load();
      player.play();
      // drawVideo(context, player, canvas.width, canvas.height);
    }
    
    setTimeout(function() {
           showIt ("first", "https://mediamplify.com/teleport/iwantharry.mp4", "none");
           setTimeout(ChangeHarry, 6000);
         } , 2000 );
    
    setTimeout(function() { 
          showIt ("first", "https://mediamplify.com/teleport/iwantharry.mp4",  "block"); 
    }, 8000 ); 
    
    setTimeout(showIt, 5000, "second", "http://96.71.39.58/live/pablo.m3u8", "none");
    setTimeout(showIt, 6000, "third",  "http://96.71.39.58/live/gus.m3u8", "none");
    console.log("Starting changing stuff"); 
    
    setTimeout(function() {
                console.log("Preeping to switch to Queen of the South" ); 
                showIt ("first", "https://mediamplify.com/teleport/iwantqueen.mp4", "none"); 
              }, 13000);  
    
    setTimeout(showIt, 15000, "third",  "http://96.71.39.58/live/pablo.m3u8", "none"); 
    setTimeout(showIt, 15010, "second", "http://96.71.39.58/live/gus.m3u8" ,  "none"); 
    
    // setTimeout(showIt, 20000, "third", "http://96.71.39.58/live/gus.m3u8", "none"); 
    setTimeout(function() { 
                console.log("Queen of the South");
                ChangeQueen();                        
                showIt("first", "", "block");
               }, 19000); 
    
    
    
    function fbProfile() {
        var x = document.getElementById("fb-profile");
        if (x.style.display === "none") {
            x.style.display = "block";
        } else {
            x.style.display = "none";
        }
    }
    
    function drawVideo(context, video, width, height) {         
       context.drawImage(video, 0, 0, width, height); // draws current video frame to canvas     
       var delay = 100; // milliseconds delay for slowing framerate
       setTimeout(drawVideo, delay, context, video, width, height); // recursively calls drawVideo() again after delay
    }

For a functional demo, first allow the site to play video with autoplay:

    Update your settings in SAFARI

We didn’t win the “NBC Universal Hackathon” but had a ton of fun doing it!  We struggled with the presentation: it was only 3 minutes, our presenter disappeared at the last minute, and Gus improvised without using all the time the judges allowed. We knew we were done when no questions were asked. Anyway, you cannot always win.


  • Cloud to Cable Patent Issued & Inventors Protection Act Updates

    Patent Issued for One of Our Media Streaming Technologies

    As you have read in my blog, Cloud-to-Cable is a platform and technology that merges the worlds of cloud and Cable TV.

One of the main objectives of the patent, as you can read in the description, is to provide:

    ” ……   systems, devices, methods, and non-transitory computer-readable media for media delivery from a cloud or IP-based distribution network, or CDN (Content Delivery Network) to MSO (Multiple System Operators) or head-ends that include cable and satellite delivery mechanisms. The present technology unifies cloud-based delivery (Mediamplify cloud) with the cable-based mechanism (e.g. Comcast, Verizon FiOS).” 

     

    https://www.slideshare.net/edwinhm/what-is-cloud-to-cable-tv-mevia-platform

     

I filed for patent protection in the US and Europe for the method and system called “Method, system, and apparatus for multimedia content delivery to cable TV and satellite operators,” now protected under US Patent 10,123,074, which will be published and formally issued on November 6th, 2018.  This patent gives me exclusive rights in the US (we are also applying in Europe and other countries) to our method and system for broadcasting TV and music stations on Cable TV, OTT, IPTV, and other MVPD systems. The technology is in use by our Music for Cable product and is now branded under MEVIA: Cloud to Cable. The patents are owned by me and are available for licensing, including all specifications and the software to create our broadcasting network appliance (MEVIA Network Appliance).

    Ted Deutch – Boca’s Congress Representative on HR.6557 – Inventors Protection Act

Ted Deutch is a well-known congressman who represents the area where EGLA COMMUNICATIONS is located. I had sent Mr. Deutch some emails about how the current patent-enforcement system is no longer favorable for innovation and, on the contrary, is an incentive to steal technology and use it without paying royalties.  H.R. 6557 is new legislation called the “Inventors Protection Act” that aims to help inventors protect their intellectual property after their patents have been issued by the USPTO.

Let’s recall the recent changes at the USPTO regarding software patents, especially the Alice decision and the America Invents Act, not forgetting IPRs and the PTAB processes now in place.

EGLA is glad to see our Congress react and move in the right direction, and we are hopeful these efforts will bear fruit soon.

A few years back our team received a visit from this congressman at our offices in the FAU Research Park, before we moved to the EGLAVATOR.

  • Music Choice Patent Axed Partially by PTAB

      

Between late September and early October, a total of two judgments have been filed thus far by the Patent Trial and Appeal Board (PTAB) in the case against Music Choice’s patents. As indicated in the rulings for IPR2017-00888 and IPR2017-01191, a partial axing of the patents has been granted to Stingray, against Music Choice’s interest.  These are interesting findings that reinforce the value of our Cloud to Cable TV patents.

    The final written decisions are available herein:

    As Law360 points out:

    ” The case split the PTAB panel, with each of the three judges filing opinions. The majority ruling by Judge Mitchell Weatherly held that eight of the patent’s 20 claims are invalid because Stingray showed that a person skilled in the art would be motivated to combine the earlier inventions to arrive at Music Choice’s invention, which includes both a playlist and on-demand playback. Source: https://www.law360.com/media/articles/1091845/ptab-partly-axes-music-choice-patent-in-row-with-rival “

This is in line with the prediction I made when this issue was first discussed, especially in the article where I analyzed the proposed acquisition of Music Choice by Stingray Digital.

    Infringement and Damages Model

As expected, Music Choice should be less interested in the “Video on Demand” revenues, which are what these two patents cover, and more interested in the patent covering the method and system for broadcasting to Cable TV, which is the main source of revenue for both companies. That dispute will be resolved this Friday, October 19th, 2018, for patent 8,769,062 as well as the original applications from 2001-08-28, which is claimed as the priority date.

    It is my opinion that a similar outcome may result from the PTAB panel.

Hence, what would the damages be for Music Choice? Well, I don’t have access to any information beyond what is found online; in a letter from 2014, Music Choice claimed to have 56M homes, or 57M listeners per month.  Source: https://www.justice.gov/sites/default/files/atr/legacy/2014/08/18/307851.pdf

In this document, Music Choice indicates that Multichannel Video Programming Distributors (MVPDs) constitute a main source of income for the company. Additionally, marketing material says VOD distribution reached 72M subscribers, per Music Choice’s 2016 media kit PDF:

    http://corporate.musicchoice.com/files/2214/6228/4671/2016_Music_Choice_Media_Kit.pdf

    In a similar filing, Music Choice makes a case with a rebuttal testimony by Gregory S. Crawford, PhD https://www.crb.gov/rate/16-CRB-0001-SR-PSSR-SDARSIII/rebuttals/2-21-17-music-choice.pdf 

Hence, assuming a price point of, let’s say, $0.05 to $0.25 per subscriber per month, and recalling what Mr. Boyko said in a prior conference call, Music Choice’s revenues could be (a quick check of the arithmetic follows the list):

• $0.05 per subscriber per month @ 50M subs ≈ $30M/year
• $0.25 per subscriber per month @ 50M subs ≈ $150M/year
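
As a throwaway sanity check of that arithmetic (the rates and the 50M subscriber base are the assumptions stated above, not reported figures):

subs = 50_000_000  # assumed subscriber base

for rate in (0.05, 0.25):  # assumed $/subscriber/month price points
    annual = rate * subs * 12
    print(f"${rate:.2f}/sub/month -> ${annual / 1e6:.0f}M/year")

# $0.05/sub/month -> $30M/year
# $0.25/sub/month -> $150M/year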

Assume now that Stingray has landed $8M/quarter in the US, an increase of 102.2%, or 23.5% of total revenues, per the 4th quarter report available online:

    ” … or the quarter, Canadian revenues decreased 2.6% to $13.6 million (41.3% of total revenues) due to decrease in non-recurring revenues related to digital signage, United States revenues increased 102.2% to $7.8 million (23.5% of total revenues), whereas revenues in Other Countries increased by 34.4% to $11.6 million (35.2% of total revenues). (CNBC source: https://www.cnbc.com/2018/06/07/globe-newswire-stingray-reports-fourth-quarter-2018-results.html) “

What this means is that roughly $32M/year of revenue has been eroded from Music Choice’s pie; counting from 2014, when the dispute with Stingray was said to have begun, the total loss could be on the order of $32M × 4 ≈ $128M in damages.  That, it seems, is close to the offer Stingray Digital made to Music Choice for a full acquisition.

The question is what the forward revenues would be under the Georgia-Pacific analysis a damages expert would prepare for pricing moving forward, and how much Stingray Digital would have to pay Music Choice on future earnings. The information collected indicates a 35% tax on royalties for Stingray and potentially another 20-30% on top, which in turn would make its United States operations run at no profit or even at a loss.


    Conclusion

It is very hard to compute damages, assuming that infringement of the surviving claims of the patents in dispute takes place. Still, if damages were computed, they should be in the range of 8-9 figures with upside into the future, not including potential treble damages, which could add even more to be paid.

Alternatively, Cloud to Cable TV is the best technological platform for monetization with Cable TV and works using the most recent advancements in cloud computing and web technologies, in combination with standard DVB systems.  A unified, patented technology for Cable TV distribution!

    https://eglacomm.net/cloud-to-cable-tv/
    Cloud to Cable TV  – US Patent 10,123,074

  • What is Cloud to Cable TV?


    Use Case : Music for Cable | Amplify your Reach®

    First Patent is Allowed and will be granted

The patent filed for an important component of the “Cloud to Cable TV” architecture has received a “Notice of Allowance,” meaning that a patent will be granted as soon as the fees are paid. I will also file continuations and divisionals, including the European Patent Office action that is also pending as part of a PCT filing.   This is the first patent created and issued at the “EGLAVATOR.”

     

    The Problem

Creating any “TV/Cable Network” is difficult.  The complexity of content distribution to cable/mobile operators (“affiliates”) is enormous and requires time, effort, lots of capital, and the use of multiple complex technologies.  As one example, the satellite time required to distribute TV/music content costs thousands if not millions of dollars per year. In the case of music distribution, this is even more complex, as revenues may need to be split among multiple brokers and intermediate agents.

Additionally, current cable TV subscribers want to consume their TV and music content on their mobile devices and tablets. Users want to enjoy their cable TV subscriptions at home, school, and the office, anytime, anywhere.

Over-the-Top (OTT) platforms are widely used today to sell individual subscriptions, but not many systems can reach millions of viewers without Cable TV’s help. Hence, Cable TV distribution provides a volume monetization outlet by tapping into millions of subscribers worldwide. Cable TV is the best monetization outlet for new networks, including music channels, TV, and Video on Demand (VOD) content.

    In this white paper, we introduce MEVIA as a novel platform solution for content distribution to mobile, web, and Cable/Satellite TV systems. MEVIA effectively reduces cost and maximizes returns.

     

    The Solution

MEVIA is a unified multimedia platform that enables quick and easy distribution of TV, video, and music package content to cable and mobile operators.  MEVIA connects the worlds of web/mobile with Communications Service Provider (CSP) or Multi-System Operator (MSO) content distribution headends.

Our “Cloud to Cable” technology is a patented system that distributes and delivers TV, music, and video channels to satellite and cable TV operators as well as to mobile/web, providing a unified user experience. MEVIA is true to our “Amplify your Reach®” slogan.

    MEVIA also includes customizable mobile applications and specialized equipment for Satellite and Cable TV broadcasting.

When a content owner decides to distribute their content with MEVIA, the first step is to load TV feeds, music assets, and/or video content into MEVIA storage or ingest servers. The content is then made available securely in all the affiliate systems on Cable TV, and on mobile/web distribution via our mobile application. Finally, a content owner may define special playlists and grid programming. Different pricing structures can be enabled, such as per subscriber, per download, or a flat rate.

    What Type of TV, Music, and Video Offers are Available?

     

The range of multimedia services that can be offered includes:

    • VOD or Video On Demand
    • Linear Television Networks

     

    A media owner could offer, for example, a Cable TV and Mobile package that includes:

    • Thousands of VOD files
    • 50+ Music channels with customizable screens
    • 5+ Linear concert channels

    What Type of Formats?

MEVIA uses all commonly available encoders and transcoders for audio and video, hence any file in any format can be ingested, processed, and broadcast. The most popular formats are MPEG and MP4, with encoding in H.264, H.265, AAC, AC-3, and MP3.

    How does MEVIA Work?

    In essence, MEVIA connects to any web-based platform, rendering its contents and preparing multiple broadcast-ready streams for operators, mobile, and web.

    These streams can deliver:

    • Music with enhanced metadata
    • TV/Video with real-time enriched web-based information, such as twitter feeds
    • Music-only content
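
For readers curious how a broadcast-ready stream is typically produced, here is a rough, generic sketch (not EGLA’s patented implementation; the source URL, multicast target, bitrates, and codec choices are assumptions for illustration) of repackaging a cloud/HLS source into an MPEG Transport Stream with ffmpeg driven from Python:

import subprocess

SOURCE = "https://example.com/channel/master.m3u8"   # assumed cloud/HLS source
TARGET = "udp://239.1.1.1:1234?pkt_size=1316"        # assumed multicast handoff to a headend

# Repackage the cloud stream as MPEG-TS with cable-friendly codecs (H.264 video, AC-3 audio).
cmd = [
    "ffmpeg", "-re", "-i", SOURCE,
    "-c:v", "libx264", "-b:v", "4000k",
    "-c:a", "ac3", "-b:a", "192k",
    "-f", "mpegts", TARGET,
]
subprocess.run(cmd, check=True)

In a real headend handoff, the container, codecs, and transport (UDP multicast, ASI, and so on) would follow whatever the operator’s system expects.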

     

    In summary, Cloud to Cable and MEVIA provides three main delivery mechanisms:

    • Applications – Mobile and Web
    • Linear streams – Cable Systems and Satellite Operators
• Over-the-Top Applications for Apple TV, Chromecast, Smart TVs, and private systems

     

     

We will present how the business model works, some case studies, and our mobile application.

    Business Model

The business model used by MEVIA is subscriber-based and aligns with the proven “Multichannel Video Programming Distributor” (MVPD) business model.  In this model, operators purchase packages from companies such as Time Warner, SONY, ESPN, Disney, CNN, and many others at prices that range from cents to several dollars per subscriber.  The MVPD generates revenue by adding targeted distribution capability.  In this case, an MVPD will use MEVIA to purchase TV, music, and video content, pay the content owner, and resell the content rights to all its subscribers as part of a “Digital” or “Premium” package, or in any format the operator chooses to use. For example, a Cable TV operator may purchase an ESPN package for $3.95/subscriber and sell a premium package with ESPN for $29.99/subscriber; other similar networks would likely cost the operator anywhere from cents per subscriber to a few dollars.  Package pricing depends on volume and, in some cases, years of negotiations and agreements.


MEVIA will provide as many channels as are included in an agreement with a specific provider, and will deliver that content to the operator in the format that their system supports: Linear TV, VOD, or Interactive.

     

    Case Study: CABLEVISION MEXICO

As an example, CABLEVISION MEXICO needed 50+ music channels branded under their “CABLEVISION” name in their market.  They provided a set of backgrounds that were used for customizable screens broadcast to their users on channels 800-850. The broadcast included their logo and artist/song metadata, as shown here:

     

MEVIA created all the music channels simultaneously and broadcast a lineup ready for more than 1 million subscribers in Mexico City. Similar screens were made for AXTEL TV, a smaller operator in Mexico City.


    Sample Set Top Boxes for DMX and CABLEVISION Music


    Case Study: MOOD MEDIA

MOOD MEDIA ingested thousands of song files into MEVIA’s storage platform via secure FTP (SFTP). The files were stored at 256 kbps, in some cases in MP3 format and in others in AC-3.

The multimedia content may be hosted within MEVIA’s storage platform and content management system. In this case study, a product was created for DMX Music/MOOD Media in the 2013-2015 timeframe, where all the assets were hosted by MEVIA. MEVIA applications and the platform were used to synchronize 10+ cable operators broadcasting multiple packages with 50+ music channels, some audio-only and others with video and metadata.   MOOD MEDIA had over 20M subscribers across operators that included TIGO, CLARO, and many others.

    In this case, a customized HTML web application and native applications were used together for mobile/web and OTT that complemented the Cable Operator offering.


    Demonstration: Case Study Using Spotify (Internal Test)

Assume a CSP has decided to make a deal with “Spotify” and would like to broadcast music to 2M subscribers with a package composed of 50 music channels from Spotify and a few music video channels from a different provider, VEVO.

     

Without our Cloud to Cable technology, this would be a daunting task, given the associated satellite fees and additional complications.  In this figure, MEVIA facilitates distribution of a web application as part of the Cable TV channel line-up.

     

MEVIA Cloud will connect to the web provider and retrieve all the required web assets that are currently in use by Spotify. Authentication and authorization can also be linked with the Cable Operator, and MEVIA provides a method for single sign-on.

     

    MEVIA can accommodate 50-100 music channels broadcasting in SD, HD, or even 4K depending on the bandwidth that the operator may have available for this service.

     

    Now, an Operator will be able to offer a particular set of Spotify playlists to its subscribers and increase Spotify® music viewership by 2M subscribers.

     

Similarly, although VEVO has no cable TV product offering, MEVIA can enable both VOD and linear programming streaming from the same appliance, using the caching and distribution network already put in place for Spotify®.

    [spiderpowa-pdf src=”http://edwinhernandez.com/wp-content/uploads/2018/06/MEVIA-box-appliance.pdf”]MEVIA-box-appliance

    Case Study: SKY Brasil and SKY TUNES Application

     

SKY BRASIL® created SKY TUNES, a product that was powered by MEVIA from 2013-2015.  MEVIA provided all the OTT streams for thousands of customers in that part of the world. The SKY TUNES mobile applications were downloaded by millions of subscribers on iOS and Android.   MEVIA provided SKY TUNES with APIs, streams, and playlists for the application, as well as analytics.


    Multimedia Ingest

Media can be ingested into our platform’s cloud storage by simply dragging and dropping all the required music files in MP3, AAC, or AC-3 format.

Movies can also be uploaded and ingested by accessing the storage and uploading the required MP4, MPEG-2, or other files encoded with any known video encoder, such as H.264, H.265, MPEG-2 Video, and many others.

     

     

    Mobile Apps for MEVIA

MEVIA provides two middleware components: one for music content, initially branded as “Mediamplify Music,” and the MEVIA apps. The first app is music-centric only and is capable of handling thousands of music channels in linear format, including a “keyword” seed-station capability.  MEVIA is more video- and music-centric, in other words playback of video and music for iOS and Android. These are sample implementations and can be customized with any additional branding or screens the operator requires.



    Patents and Trademarks

     

    Amplify your Reach ® is a registered trademark of EGLA COMMUNICATIONS

    US15/538,911 and PCT/US2015/067464 METHOD, SYSTEM, AND APPARATUS FOR MULTIMEDIA CONTENT DELIVERY TO CABLE TV AND SATELLITE OPERATORS

US Patent 7,339,493 MULTIMEDIA CONTROLLER

     


    For a PDF version of this document: What is Cloud to Cable

     

  • An Augmented Reality Anemometer – First Update 


During the “Emerge Americas Hackathon 2018,” the theme was “Miami Resilience.” Especially with all the hurricane events in Florida, a key element of those weather reports is the classic anemometer and its wind-speed measurements. As we all know, anything over “Tropical Storm” and Floridians prepare for the worst, as our waters are so warm that anything can turn into a Category 3-5 hurricane very quickly.

Storm chasers and weather-channel meteorologists are indeed interested in taking a shot at measuring the storm with an anemometer.

As we all recall, it is classic to see many storm chasers holding an anemometer and measuring wind speed. In this figure, we see the hurricane-force winds of Irma, a devastating hurricane that passed by us in 2017.

The idea of this project, and the challenge, was to measure wind speed when the only tool we have is the phone: can the phone analyze the video feed taken from its own camera and compute the wind speed? Is this even possible?

The answer is yes. By reviewing the video frames, I was able to identify a pattern and an index that varies from 0-5 and maps to the speed; the mapping is not linear, but it exists.

To make this work, we replace a “velocity anemometer,” which could be mechanical or ultrasonic, with a video-powered anemometer, or better yet, an augmented reality device that augments the scene by giving the user a reading of the hurricane or wind speed.  For that, the main tool is a server running locally on a computer that analyzes the images frame by frame and builds a correlation function based on the number of squares counted after thresholding.
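
A minimal sketch of that frame-by-frame analysis loop is shown below. It assumes OpenCV and a webcam as the video source, and the threshold value and the mapping from region count to the 0-5 index are made up for illustration; it shows only the thresholding-and-counting idea, not the exact correlation function described in the provisional filing:

import cv2

cap = cv2.VideoCapture(0)  # assumed video source (a webcam; a phone stream URL would also work)

for _ in range(300):  # analyze a few hundred frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # Count the distinct bright regions ("squares") that survive the threshold.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    count = len(contours)
    wind_index = min(5, count // 20)  # hypothetical mapping from region count to a 0-5 index
    print(f"regions={count} wind_index={wind_index}")

cap.release()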

This process and system have been filed as a provisional patent application.

    [jwplayer mediaid=”23005″]

     

I will provide more updates as I get the sources into GitHub and finish other adjustments to the AR app.

    https://www.youtube.com/watch?v=IIs7Dr13sGc

     

    [slideshare id=95630514&doc=aranemometer01-180501220352]

  • Predicting Network Traffic using Radial-basis Function Neural Networks – Fractal Behavior


I found a paper I wrote back in December 2011 about predicting network traffic using radial-basis function neural networks (RBFNN). Currently, new trends in artificial intelligence are key, and RBF kernels are in use by machine-learning methods and systems.

    ” Fractal time series can be predicted using radial basis function neural networks (RBFNN). We showed that RBFNN effectively predict the behavior of self-similar patterns for the cases where their degree of self-similarity (H) is close to the unity. In addition, we observed the failure of this method when predicting fractal series when H is 0.5. ”

     

As the Hurst parameter gets closer to 0.5, RBFNN become useless for predicting fractal behavior, reflecting the randomness of a series with a Hurst parameter of 0.5.

For BRW (brown noise, 1/f²) one gets Hq = 1/2, and for pink noise (1/f), Hq = 0.

     

Obviously, the Hurst parameter, or Hurst exponent, is nothing but a degree of “fractality” for a data set. In general, we don’t expect to predict noise; there is no practical use for that particular case. We are using the Hurst parameter to see when the RBFNN is capable of finding the right response to the data being introduced to the set.

    Conclusions

Fractal time series can be predicted using RBFNN when the degree of self-similarity (Hurst parameter) is around 0.9. The mean square error (MSE) between the real and predicted sequences was measured to be 0.36 at a minimum. Meanwhile, fractal series with H=0.5 cannot be predicted as well as those with higher values of H.

It was expected that, due to the clustering process, a better approximation could be achieved using a greater value of M and a small dimensionality; however, this behavior was not observed and, in contrast, the performance had an optimal point at M=50 using d=2. This phenomenon would require a deeper study and is out of the scope of this class report.

    Future Work

I am picking this work back up and combining it with Big Data; it should be correlated with RF and other systems and related research.

    Introduction Big Data in RF Analysis | Hadoop: Tutorial and BigData 


    PREDICTION OF FRACTAL TIME SERIES USING RADIAL BASIS FUNCTION NEURAL NETWORKS

    Fractal time series can be predicted using radial basis function neural networks (RBFNN). We showed that RBFNN effectively predict the behavior of self-similar patterns for the cases where their degree of self-similarity (H) is close to the unity. In addition, we observed the failure of this method when predicting fractal series when H is 0.5.

    Introduction

We will first review the meaning of the term fractal. The concept of a fractal is most often associated with geometrical objects satisfying two criteria: self-similarity and fractional dimensionality. Self-similarity means that an object is composed of sub-units and sub-sub-units on multiple levels that (statistically) resemble the structure of the whole object. Mathematically, this property should hold on all scales. However, in the real world, there are necessarily lower and upper bounds over which such self-similar behavior applies. The second criterion for a fractal object is that it has a fractional dimension. This requirement distinguishes fractals from Euclidean objects, which have integer dimensions. As a simple example, a solid cube is self-similar since it can be divided into sub-units of 8 smaller solid cubes that resemble the large cube, and so on. However, the cube (despite its self-similarity) is not a fractal because it has an integer (=3) dimension. [1]

    The concept of a fractal structure, which lacks a characteristic length scale, can be extended to the analysis of complex temporal processes. However, a challenge in detecting and quantifying self-similar scaling in complex time series is the following: Although time series are usually plotted on a 2- dimensional surface, a time series actually involves two different physical variables. For example, in Figure 1. the horizontal axis represents “time,” while the vertical axis represents the value of the variable that changes over time. These two axes have independent physical units, minutes and bytes/sec respectively (For example). To determine if a 2-dimensional curve is self-similar, we can do the following test: (i) take a subset of the object and rescale it to the same size of the original object, using the same magnification factor for both its width and height; and then (ii) compare the statistical properties of the rescaled object with the original object. In contrast, to properly compare a subset of a time series with the original data set, we need two magnification factors (along the horizontal and vertical axes), since these two axes represent different physical variables.

    Fig. 1. Fractal time series

In the two observation windows, h1 and h2, we can observe a linear dependency between the variances and the window sizes. In other words, the slope is determined by (log(s2) − log(s1))/(log(h1) − log(h2)). This slope value is used to obtain the Hurst parameter (H); in general, a value of 0.5 indicates a completely Brownian process, whereas 0.99 indicates a highly fractal one.
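
A small sketch of that log-variance versus log-window-size fit is the aggregated-variance estimator below. It uses the common relation Var(X^(m)) ∝ m^(2H−2), i.e. H = 1 + slope/2, and synthetic white noise stands in for a real traffic trace:

import numpy as np

def hurst_aggregated_variance(x, window_sizes=(10, 20, 50, 100, 200, 500)):
    """Estimate the Hurst parameter from the slope of log(variance) vs log(window size)."""
    log_m, log_var = [], []
    for m in window_sizes:
        n_blocks = len(x) // m
        # Aggregate the series into non-overlapping blocks of size m and average each block.
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0  # Var(X^(m)) ~ m^(2H-2)

x = np.random.randn(65536)            # white-noise stand-in for a traffic trace
print(hurst_aggregated_variance(x))   # close to 0.5 for an uncorrelated series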

The research conducted by Sally Floyd and Vern Paxson [2] concluded that network traffic is fractal in nature with H>0.6. Therefore, RBFNN could be used in this field for network traffic control and analysis. Indeed, we made use of Vern Paxson’s [3,4] method to generate a fractal trace based upon the fractional Gaussian noise approximation. The inputs of Paxson’s program are: the mean, the variance, the Hurst parameter, and the amount of data. We decided to keep the mean at zero, μ = 0, set the variance σ² = 1, and generate 65536 points. Figures 2 and 3 depict the fractal time series at different sampling windows.

Fig. 2. Fractal sequence sampled at different intervals, H=0.5, μ=0 and σ² = 1

Fig. 2 depicts the generated samples used for training and testing of the GBRF. The signal is composed of 65536 data samples, ranging between 4 and −4, although we only used 10000 points for training and 10000 points for testing. Similarly, Fig. 3 presents the histogram and fast Fourier transform corresponding to the input in Fig. 2.

    Fig. 3. Histogram and fast Fourier transform of the self-similar sequence H=0.5

Fig. 4. Fractal sequence sampled at different intervals, H=0.9, μ=0 and σ² = 1

In addition, Fig. 4 and Fig. 5 show the input data at H=0.9. Both plots show a big difference in the frequency domain between the time series with different values of H. This difference allows us to speculate that the RBFNN will be able to perform much better than in the purely random case.

    Fig. 5 Histogram and fast Fourier transform of the self-similar sequence H=0.9

    Radial basis functions

A radial basis function, like a spherical Gaussian, is a function which is symmetrical about a given mean or center point in a multi-dimensional space [5]. In a Radial Basis Function Neural Network (RBFNN), a number of hidden nodes with radial basis activation functions are connected in a feed-forward parallel architecture (Fig. 6). The parameters associated with the radial basis functions are optimized during training. These parameter values are not necessarily the same throughout the network nor directly related to or constrained by the actual training vectors. When the training vectors are presumed to be accurate, i.e., non-stochastic, and it is desirable to perform a smooth interpolation between them, then a linear combination of radial basis functions can be found which gives no error at the training vectors. The method of fitting radial basis functions to data, for function approximation, is closely related to distance-weighted regression. As the RBFNN is a general regression technique, it is suitable for both function mapping and pattern recognition problems.



    Fig. 6. Radial basis function representation with k-outputs, M-clusters and d-inputs.

The equations for a Gaussian radial basis function (GRBF) network are as follows. Each hidden node computes a Gaussian basis function

Φ_j(x_n) = exp( −‖x_n − μ_j‖² / (2σ_j²) )

and the network output is the weighted sum

y_k(x_n) = Σ_{j=1..M} W_kj Φ_j(x_n).

In all cases, n ∈ {1, …, N} indexes the patterns, k ∈ {1, …, K} the outputs, and j ∈ {1, …, M} the clusters used in the network.

According to Bishop [6], the solution for the weight matrix is defined as follows:

W^T = Φ⁺ T

where all these matrices are defined by:

W = {W_kj}

Φ = {Φ_nj}, with Φ_nj = Φ_j(x_n)

T = {T_nk}, with T_nk = t_k^n

and, finally, Y = {Y_nk}, with Y = Φ W^T.

Therefore, the weight matrix can be calculated with the formula W^T = Φ⁺ T, where Φ⁺ = (Φ^T Φ)^(−1) Φ^T is the pseudo-inverse of Φ.

Since Φ is a non-square matrix, the pseudo-inverse is required to calculate the matrix W.
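
For concreteness, here is a small numpy sketch of this least-squares step, assuming the centers μ_j and widths σ_j have already been chosen (for example by the clustering described in the next section); it mirrors the pseudo-inverse formula above rather than any particular implementation:

import numpy as np

def design_matrix(X, mu, sigma):
    """Phi[n, j] = exp(-||x_n - mu_j||^2 / (2 * sigma_j^2))."""
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # squared distances, N x M
    return np.exp(-d2 / (2.0 * sigma[None, :] ** 2))

def train_weights(X, t, mu, sigma):
    """Solve W = Phi^+ t via the pseudo-inverse (one output, K = 1)."""
    Phi = design_matrix(X, mu, sigma)
    return np.linalg.pinv(Phi) @ t

def predict(X, mu, sigma, W):
    return design_matrix(X, mu, sigma) @ W

# Toy usage: D = 2 lagged inputs, M = 5 random centers.
rng = np.random.default_rng(0)
X, t = rng.standard_normal((200, 2)), rng.standard_normal(200)
mu, sigma = rng.standard_normal((5, 2)), np.ones(5)
W = train_weights(X, t, mu, sigma)
mse = np.mean((predict(X, mu, sigma, W) - t) ** 2)
print(mse)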

    RBFNN and radial basis functions implemented.

The input to the MATLAB code corresponds to a file generated by the fractal generator. The input data from the fractal file had to be rearranged and organized such that each group of D inputs stimulated the M GRBFs, with element D+1 of the sequence considered the output. Hence, each sequence of D inputs produces one output, and can be arranged as follows:

{x_i} = {x[1], x[2], x[3], …, x[D]}

This set {x_i} determines the output t_k, which is x[D+1]. This output is used for training of the RBFNN.

Each input set {x_i} generates a set of μ_j and σ_j values, where j ∈ {1, …, M} and n ∈ {1, …, N/(D−1)}. The data is subdivided into N/(M×(D−1)) clusters of dimension D, from which μ_j and σ_j are calculated. This calculation is done by first sorting the data according to t_k^n, the expected outcome. By sorting the {x_n} via t_k^n we are able to cluster the input, so each independent basis function represents a cluster of inputs that generate a similar outcome.

Hence, it would be expected to obtain better predictions for bigger values of M, i.e., by decreasing the granularity of the clusters. For instance, with d=2 and M=100, given a training set of N=3000, each cluster, starting with j=1, contains about 10 elements.

A cluster of size 10 will have means m1 and m2, computed over the 10 elements in the first and second columns respectively. The variance is determined using all the elements in the cluster, across both columns and rows. Hence, it would be expected that for a big cluster, that is, a small value of M and a high dimensionality, this method leads to a bigger error during the approximation.
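
A minimal numpy sketch of that sort-and-split clustering step follows; it tracks the description above and the MATLAB listing in the appendix only loosely, and the toy data is random:

import numpy as np

def cluster_centers(X, t, M):
    """Sort training pairs by target, split into M groups, return per-group means and spreads."""
    order = np.argsort(t)                  # sort inputs by their expected outcome t
    groups = np.array_split(X[order], M)   # M contiguous clusters of (roughly) equal size
    mu = np.array([g.mean(axis=0) for g in groups])      # one center per basis function
    sigma = np.array([g.std() + 1e-6 for g in groups])   # one scalar width per cluster
    return mu, sigma

# Toy usage with D = 2 lagged samples per input vector.
rng = np.random.default_rng(1)
X, t = rng.standard_normal((1000, 2)), rng.standard_normal(1000)
mu, sigma = cluster_centers(X, t, M=50)
print(mu.shape, sigma.shape)   # (50, 2) (50,)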

    Results and experimental prediction using radial basis functions.

Once the Φ matrix is determined, as well as the weight vector W, we proceeded to test the RBFNN with some input data.

     

Table 1. Mean square error (MSE) of the training sample at different Hurst parameters, as a function of the input dimension (d) and the number of GBRFs (M)

 d  |   M  | MSE (H=0.9) | MSE (H=0.5)
----+------+-------------+------------
  2 |   10 |     0.396   |    0.508
  2 |   20 |     0.369   |    0.507
  2 |   50 |     0.349   |    0.514
  2 |  100 |     0.352   |    0.521
  2 |  200 |     0.727   |    0.546
  4 |   10 |     0.430   |    0.511
  4 |   20 |     0.410   |    0.512
  4 |   50 |     0.380   |    0.523
  4 |  100 |     0.372   |    0.537
  4 |  200 |     0.384   |    0.565
  8 |   10 |     0.501   |    0.524
  8 |   20 |     0.516   |    0.533
  8 |   50 |     0.466   |    0.566
  8 |  100 |     0.479   |    0.597
  8 |  200 |     0.562   |    0.768
 16 |   10 |     0.561   |    0.542
 16 |   20 |     0.561   |    0.575
 16 |   50 |     0.676   |    2.735
 16 |  100 |     4.004   |   29.107
 16 |  200 |   435.043   |    8.597

We made use of a test sequence as big as the training input (10000 points). Table 1 depicts the mean square error at different degrees of self-similarity and for different numbers of hidden nodes or basis functions (M).

    Fig 7. Error and comparison between predicted and real sampled signal for d=2 and M=50. Input signal for H=0.9, 10000 samples used for training

All the input sequences were compared between the original and the predicted input. The best prediction and smallest mean square error (MSE) was observed with d=2, M=50, and H=0.9. This behavior can also be seen in the qualitative shape shown in Fig. 7, where the predicted and real sampled data are very similar and the predicted data follows the real sequence. Although the magnitudes are not fully matched, the RBFNN was able to produce a good approximation of the input.

Fig. 8. Error and comparison between predicted and real sampled signal for d=16 and M=200. Input signal for H=0.9, 10000 samples used for training

Meanwhile, Fig. 8 shows the results obtained with H=0.9, M=200, d=16, where we observe over-estimation in the predicted sequence, which makes the error grow significantly. Those over-estimations are not plotted in the figure but are roughly between 10 and 20 in magnitude.

Notwithstanding that the MSE grows to unreasonable values, qualitatively the shape of the predicted sequence still follows the real testing sample data.

    Fig 9. Error and comparison between predicted and real sampled signal for d=2 and M=20. Input signal for H=0.5, 10000 samples used for training

Besides the tests executed on the input sequence with H=0.9, Fig. 9 and Fig. 10 depict the behavior of the RBFNN under H=0.5 stimulation. Both plots show the poor performance of the RBFNN when this type of stimulation was employed. In fact, Table 1 shows that the minimum MSE was about 0.5, whereas with H=0.9 the minimum was around 0.3. We have to clarify that for each data set used, the RBFNN was trained and its weight matrix calculated from that input pattern, and the performance of the neural network was then tested using a pattern of the same kind as the training set.

As shown in Fig. 10, the worst performance of the RBFNN was observed when using 16 inputs (d=16) and M=100 to determine the predicted pattern. Although the error is higher than the MSE measured in Fig. 9, qualitatively this shape still seems to follow the real sequence used as input.

    Fig 10. Error and comparison between predicted and real sampled signal for d=16 and M=100. Input signal for H=0.5, 10000 samples used for training

    CONCLUSIONS

Fractal time series can be predicted using RBFNN when the degree of self-similarity (Hurst parameter) is around 0.9. The mean square error (MSE) between the real and predicted sequences was measured to be 0.36 at a minimum. Meanwhile, fractal series with H=0.5 cannot be predicted as well as those with higher values of H.

It was expected that, due to the clustering process, a better approximation could be achieved using a greater value of M and a small dimensionality; however, this behavior was not observed and, in contrast, the performance had an optimal point at M=50 using d=2. This phenomenon would require a deeper study and is out of the scope of this class report.

    APPENDICES

MATLAB CODE USED FOR THE NEURAL NETWORK AND SELF-SIMILAR TRACE PRE-PROCESSING

% Fractal sequence processing
% © 2001 - Edwin Hernandez
% selfSimilar = input(' Input the name of the file with self-similar content ');
load selfSimilarH05;
x_1 = 1:10;        y_1 = size(10);
x_2 = size(100);   y_2 = size(100);
x_3 = size(1000);  y_3 = size(1000);
x_4 = size(10000); y_4 = size(10000);
j_1 = 1; j_2 = 1; j_3 = 1; j_4 = 1;
for i = 1:10000,
    if (mod(i, 1000) == 0)
        x_1(j_1) = i;
        y_1(j_1) = selfSimilarH05(i);
        j_1 = j_1 + 1;
    end
    if (mod(i, 100) == 0)
        x_2(j_2) = i;
        y_2(j_2) = selfSimilarH05(i);
        j_2 = j_2 + 1;
    end
    if (mod(i, 10) == 0)
        x_3(j_3) = i;
        y_3(j_3) = selfSimilarH05(i);
        j_3 = j_3 + 1;
    end
    x_4(i) = i;
    y_4(i) = selfSimilarH05(i);
end

subplot(2,2,1); plot(x_1, y_1);
title('Sampled at 1000 sec', 'FontSize', 8);
%xlabel('time (s)', 'FontSize', 8);
ylabel('Data', 'FontSize', 8);

subplot(2,2,2); plot(x_2, y_2);
title(' Sampled at 100 sec', 'FontSize', 8);
%xlabel('time (s)', 'FontSize', 8);
ylabel('Data ', 'FontSize', 8);

subplot(2,2,3); plot(x_3, y_3);
title(' Sampled at 10 sec', 'FontSize', 8);
xlabel('time (s)', 'FontSize', 8);
ylabel('Data', 'FontSize', 8);

subplot(2,2,4); plot(x_4, y_4);
title(' Sampled at 1 sec', 'FontSize', 8);
xlabel('time (s)', 'FontSize', 8);
ylabel('Data', 'FontSize', 8);

pause
subplot(2,1,2), plot(log(abs(fft(y_4, 1024))));
title(' Fast fourier transform (1024 samples)', 'FontSize', 8);
ylabel('log10', 'FontSize', 8);
xlabel('frequency domain', 'FontSize', 8);

subplot(2,1,1), hist(y_4, 100);
xlabel('Data in 100 bins', 'FontSize', 8);
ylabel('Samples', 'FontSize', 8);
title(' Histogram ', 'FontSize', 8);

pause
H = 20;
for i = 1:H-1,
    x(i) = size(round(10000/H));
end
yk = size(round(10000/5));
% 4 inputs and 1 output to create the Yk samples
j = 1;
load selfSimilarH09;
for i = 1:H:10000,
    for k = 0:H-2,
        x1(j) = selfSimilarH09(i+k);
    end
    yk(j) = selfSimilarH09(i+k+1);
    j = j + H;
end

subplot(5,1,1), plot(x1);
subplot(5,1,2), plot(x2);
subplot(5,1,3), plot(x3);
subplot(5,1,4), plot(x4);
subplot(5,1,5), plot(yk);
% Gaussian radial basis functions
% --------------------------------------------------------------------
% Edwin Hernandez
% Modified to sort the clusters and then find the Mu's and the sigmas.
% if M=100 I will sort all the clusters in 100 piles.

D = 16;
M = 200;
load selfSimilarH09
NDATA = 10000;
% get all the chunks and the T matrix
% out of all the inputs only 65500 I'll use
k = 1;
x = size(round(NDATA/(D+1)), D);
t = size(round(NDATA/(D+1)));

for j = 1:round(NDATA/(D+1)),
    for i = 1:D,
        x(j,i) = selfSimilarH09(k);
        k = k + 1;
    end
    k = k + 1;
    t(j) = selfSimilarH09(k);
end

u = size(size(x), D+1);
u = [x, t'];
u = sortrows(u, D+1);

x = u(1:size(x), 1:D);
[R,C] = size(x);
t = u(C*R+1:(C+1)*R)';
%pause;
%cwd = pwd;
%cd(tempdir);
%pack
%cd(cwd)

L = size(t);
cluster = floor(R*C/(M*D));
Mu = size(M, D);
sigma = size(M);
Mean = size(D,1);
k = 0;
for j = 1:M,
    if (j < M)
        z = x(k+1:j*cluster, 1:D);
        k = j*cluster;
        [l,c] = size(z);
        sigma(j) = cov(z(1:l*c));
        Mean = mean(z);
        %pause;
    else
        z = x(k+1:R, 1:D);
        k = j*cluster;
        [l,c] = size(z);
        sigma(j) = cov(z(1:l*c));
        Mean = mean(z);
    end
    for i = 1:D,
        Mu(j,i) = Mean(i);
    end
end
cwd = pwd;
cd(tempdir); pack
cd(cwd)

Phi = size(M, round(NDATA/(D+1))); % M, GBRF ...
for j = 1:M,
    for k = 1:round(NDATA/(D+1)),
        dist = 0;
        for i = 1:D,
            dist = dist + (x(k, i) - Mu(j, i))^2;
        end
        Phi(j, k) = exp(-2*dist/(2*sigma(j)));
    end
end

cwd = pwd; cd(tempdir); pack
cd(cwd)

% Weight matrix.  W = size(M, 1);
W = pinv(Phi)'*t;

x_test = size(round(NDATA/(D+1)), D);
t_test = size(round(NDATA/(D+1)));

k = NDATA + 1;
for j = 1:round(NDATA/(D+1)),
    for i = 1:D,
        x_test(j,i) = selfSimilarH09(k);
        k = k + 1;
    end
    k = k + 1;
    t_test(j) = selfSimilarH09(k);
end

error = size(round(NDATA/(D+1)));
y = size(round(NDATA/(D+1)));
Phi_out = size(M);
meanSQRerror = 0
for k = 1:round(NDATA/(D+1)),
    for j = 1:M,
        dist = 0;
        for i = 1:D,
            dist = dist + (x_test(k, i) - Mu(j, i))^2;
        end
        Phi_out(j) = exp(-2*dist/(2*sigma(j)));
    end
    y(k) = Phi_out*W;
    error(k) = y(k) - t_test(k);
    meanSQRerror = 0.5*(y(k)-t_test(k))^2 + meanSQRerror;
    if abs(y(k)) >= 5
        y(k) = 5;
    end
    if abs(error(k)) >= 5
        error(k) = 5;
    end
end

fprintf('The mean square error is : %f', meanSQRerror);
c = round(NDATA/(D+1));
subplot(2,1,1), plot(1:c, error);
title(' Prediction error ', 'FontSize', 8);
%subplot(3,1,2), hist(error, 100);
%title(' Error histogram ', 'FontSize', 8);
subplot(2,1,2), plot(1:c, t_test(1:c), 'r:', 1:c, y);
title(' Real and predicted Data ', 'FontSize', 8);
legend('Real', 'predicted');

    REFERENCES

1. Peng C-K, Hausdorff JM, Goldberger. Fractal Analysis Methods. http://reylab.bidmc.harvard.edu/tutorial/DFA/node1.html
2. Vern Paxson and Sally Floyd, "Wide-Area Traffic: The Failure of Poisson Modeling," IEEE/ACM Transactions on Networking, Vol. 3, No. 3, pp. 226-244, June 1995.
    3. Vern Paxson. Fast Approximation of Self-Similar Network Traffic. Technical Report LBL-36750, Lawrence Berkeley Labs, April 1995.
    4. Vern Paxson. http://ita.ee.lbl.gov/html/contrib/fft_fgn_c-readme.txt
    5. Radial Basis Functions. http://www.maths.uwa.edu.au/~rkealley/ann_all/node162.html
    6. Christopher Bishop “Neural networks for pattern recognition”, Oxford University Press, Birmingham, UK, 1995.

     


    [spiderpowa-pdf src=”http://edwinhernandez.com/wp-content/uploads/2017/08/GRBFF.pdf”]GRBFF

  • Music for Cable Re-Launch

    Music for Cable and CLOUD TO CABLE

    Sites Relaunched: cloudtocable.com, cloudforcable.com, musicforcable.com and ubiquicast.com

    Cloud to Cable enables a music or video streaming service to be delivered to CABSAT systems. Our platform can cover 10, 50, 100+ music or radio channels that are rendered from your HTML5 web interface and delivered directly to a Cable TV system and to subscribers’ set-top boxes.

    A “MediaPlug” server appliance is provisioned with our cloud-based VM running our proprietary software and connects to a streaming service. As an example, we have created “MEVIA & Mediamplify Music,” a Cable TV offering also available for Cable/Satellite systems. Read more

    Launching Music for Cable TV is then as easy as signing a partnership agreement with EGLA COMMUNICATIONS and creating a selection of stations for CABSAT. Licensing should be fast and easy to obtain directly from SoundExchange, BMI, ASCAP, and other providers.

    We will power your music streaming service and protect all your content with industry-standard Digital Rights Management (DRM), using encrypted and authenticated connections to our cloud or directly to yours. Read more

    [spiderpowa-pdf src=”http://edwinhernandez.com/wp-content/uploads/2017/07/Music-for-Cable.pdf”]Music for Cable

    Cloud to Cable

    Sites:

    [spiderpowa-pdf src=”http://edwinhernandez.com/wp-content/uploads/2017/07/Mevia-Cloud-to-Cable-Music-TV-for-Cable.pdf”]Mevia – Cloud to Cable – Music, TV for Cable
  • Music Choice vs Stingray Digital – Case 2:16-cv-586-JRG-RSP


    In this article, we discuss the order and memorandum from Judge Roy Payne regarding all of the disputed claim terms and their construction. As expected, the judge went with:

    “[C]laims ‘must be read in view of the specification, of which they are a part.’” Id. (quoting Markman v. Westview Instruments, Inc., 52 F.3d 967, 979 (Fed. Cir. 1995) (en banc)). “[T]he specification ‘is always highly relevant to the claim construction analysis.’”

    As in many cases, that held true here as well; the order/memorandum is available online:

    [spiderpowa-pdf src=”http://edwinhernandez.com/wp-content/uploads/2017/07/Music_Choice_v_Stingray_Digital_Group_Inc__txedce-16-00586__0145.0.pdf”]Music_Choice_v_Stingray_Digital_Group_Inc__txedce-16-00586__0145.0

    As shown, wherever Music Choice offered a simple definition for a term, Judge Payne went with the simplest and most appropriate meaning of the words. Music Choice won essentially every term, and the “indefiniteness” arguments did not move an inch in Stingray’s favor; the judge consistently sided with Music Choice’s arguments and claim constructions. For instance:

    • What was the goal of trying to interpret a Cable TV system as if it were not a digital system? I don’t really understand why Greenberg did not agree to this simple term. The judge sided with Music Choice: “Accordingly, the Court rejects Defendants’ proposed “not a digital network” and “signal” limitations and determines the transmission-system terms have their plain and ordinary meaning without the need for further construction.” The claims recite a first transmission system and a second transmission system.
    • The same holds for “multicast,” a well-known term in Cable TV systems, where multicasting is used to transmit all linear TV signals; here, too, the Court rejected the proposed limitations and gave the term its plain and ordinary meaning.
    • A very similar analysis applies to the term “trigger message,” where the judge adopted the same straightforward meaning: “Accordingly, the Court construes ‘trigger message’ as follows: ‘trigger message’ means ‘message configured to initiate an action.’”
    • And you can find a very similar argument for most of the terms in dispute.

    All the evidence is sealed, so there is no way to see exactly how these terms map onto the accused device. However, Music Choice’s attorneys should be well prepared, and since the constructions came out in their favor, one can assume that their evidence matching these terms is solid.

    We will keep track of this case as it develops. On a different note, Music Choice was also hit by Stingray with several IPRs:

    Music Choice now has to defend the following IPR petitions filed by Stingray Digital in connection with this case:

    • Trial Number – IPR2017-01450
      Filing Date – 5/18/2017
      Patent # – 9,414,121
      Title – SYSTEMS AND METHODS FOR PROVIDING AN ON-DEMAND ENTERTAINMENT SERVICE
      Patent Owner –  MUSIC CHOICE
      Petitioner – Stingray Digital Group Inc.
      Tech Center – 2400
    • Trial Number – IPR2017-01192
      Filing Date – 3/31/2017
      Patent # – 8,769,602
      Title – SYSTEM AND METHOD FOR PROVIDING AN INTERACTIVE, VISUAL COMPLEMENT TO AN AUDIO PROGRAM
      Patent Owner –  MUSIC CHOICE
      Petitioner – Stingray Digital Group Inc.
      Tech Center – 2400
    • Trial Number – IPR2017-01191
      Filing Date – 3/30/2017
      Patent # – 9,351,045
      Title – SYSTEMS AND METHODS FOR PROVIDING A BROADCAST ENTERTAINMENT SERVICE AND AN ON-DEMAND ENTERTAINMENT SERVICE
      Patent Owner –  MUSIC CHOICE
      Petitioner – Stingray Digital Group Inc.
      Tech Center – 2400
    • And possibly others: http://www.gbpatent.com/content/uploads/IPR.pdf

    For example, we found: http://ptolitigationcenter.com/2017/05/pto-litigation-report-may-19-2017/

    Disclosure: EGLA, which I own, provided a digital music distribution platform for DMX. Stingray acquired DMX Music but not our technology, keeping its own music delivery system, which is the system now accused of infringement. However, EGLA owns a patented technology called “CLOUD to CABLE TV” that delivers linear music channels to Cable TV subscribers in a more clever, fault-tolerant, and efficient way than the patents disputed here. Source: http://edwinhernandez.com/2016/08/01/platform-nternet-tv-music/

    https://www.slideshare.net/edwinhm/egla-communications-cloud-to-cable-tv-licensing-proposal?qid=9a8ef586-78a7-45a5-b02f-fc6ad1e0a95a&v=&b=&from_search=1


    Patent for MediaPlug for the Cloud to Cable TV – WIPO Format

    EGLA CORP has a patented technology, which we believe is superior to other patented approaches out there, that brings cloud-based systems together with generated imagery for music and TV channels that can be overlaid. The Cloud to Cable TV system provides:

    • A system to convert HTML5 to video: MPEG-4, MPEG-2 Video, or H.265 (see the illustrative sketch after this list)
    • A fault-tolerant system for MVPDs and MSOs (Cable TV systems)
    • Streaming via M3U8 and HTTP streaming, compatible with other technologies
    • A virtualized TV-in-a-box system backed by the cloud
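
    The actual MediaPlug implementation is proprietary, but the general shape of the HTML5-to-video path can be illustrated with off-the-shelf tools. The sketch below is only an assumption-laden illustration, not EGLA’s code: it assumes a Linux host with ffmpeg installed, a browser already rendering the channel’s HTML5 page full screen on a virtual X display (here :99), and the channel audio on the default PulseAudio source; the multicast address and bitrates are made-up examples. It captures the rendered page, encodes it as MPEG-2 video with MPEG audio, and pushes the resulting MPEG transport stream over UDP toward a headend multiplexer.

    # Illustration only -- NOT EGLA's proprietary MediaPlug code. All names and
    # values here (display :99, the multicast address, bitrates) are hypothetical.
    import subprocess

    DISPLAY = ":99"                                      # hypothetical virtual display with the HTML5 UI
    AUDIO_SOURCE = "default"                             # hypothetical PulseAudio source with the channel audio
    HEADEND_URL = "udp://239.1.1.10:5000?pkt_size=1316"  # example multicast target

    cmd = [
        "ffmpeg",
        # Grab the rendered HTML5 page as a video source.
        "-f", "x11grab", "-framerate", "30", "-video_size", "1280x720", "-i", DISPLAY,
        # Grab the channel audio.
        "-f", "pulse", "-i", AUDIO_SOURCE,
        # Encode to cable-friendly MPEG-2 video and MPEG-1 Layer II audio.
        "-c:v", "mpeg2video", "-b:v", "4M",
        "-c:a", "mp2", "-b:a", "192k",
        # Mux into an MPEG transport stream and push it toward the headend multiplexer.
        "-f", "mpegts", HEADEND_URL,
    ]

    subprocess.run(cmd, check=True)

    An M3U8/HLS output (the HTTP-streaming path listed above) could be produced from the same capture by swapping the UDP/MPEG-TS output for ffmpeg’s HLS muxer, and in a real deployment the whole pipeline would run inside the provisioned VM or container.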

    The device is called MediaPlug; it also provides management APIs and includes a solid implementation tested with:

    • Cisco
    • Ericsson
    • Huawei
    • and many other multiplexers

    Advantages over all other systems

    There is no dependency on any particular set-top box, on DOCSIS 2.0 or DOCSIS 3.0, or on specific MPEG framing, and no changes to the STB are required.

    All the systems are fault-tolerant, providing high reliability and remote management for all distribution devices.

    The system uses standard DSL/cable modem technologies to deliver 50, 100, or 200 music channels and 10–20 HD/SD/4K TV channels (a rough bandwidth estimate follows below).
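
    As a rough back-of-the-envelope check (my own illustrative figures, not numbers taken from the patent): at roughly 256 kbps per music channel and about 6 Mbps per H.264 HD channel, 200 music channels come to about 200 × 256 kbps ≈ 51 Mbps and 20 HD channels to about 20 × 6 Mbps = 120 Mbps, which together fit comfortably within the downstream capacity of a bonded DOCSIS 3.0 cable-modem link (a few hundred Mbps); 4K channels would of course raise the per-channel figure.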

    Many additional advantages include DRM security, provisioning, and tracking of media playback.


    [spiderpowa-pdf src=”http://edwinhernandez.com/wp-content/uploads/2017/07/WO2016106360.pdf”]WO2016106360; note, however, that the correct set of cited references is attached and has been corrected in the US, European, and other applications.
    Cloud to Cable TV White paper [spiderpowa-pdf src=”http://edwinhernandez.com/wp-content/uploads/2017/07/cloudtocable_whitepaperl.pdf”]cloudtocable_whitepaperl