• What is 5G?

    What is 5G? I am currently in the process of writing a book and will be working out some of its ideas on my blog. The book will be called “Viva5g,” and it will include several series: one for “Entrepreneurs and Executives” and another, written in more technical depth, for engineers and experts.

    Where does 5G come from? Answer: Standards Bodies

    5G is what is called the fifth generation of wireless communication technology, or more precisely a set of standards. The current standard is 4G, the fourth generation, and before it came the third and second generations, 3G and 2G. Who wouldn’t recall purchasing an iPhone 3GS with 3G enabled?

    All of these wireless generations are, at bottom, ways for carriers to match a piece of “terminology” to a set of standards, so that consumers can identify what they are getting into, and not only consumers, but investors, the media, and even the carriers’ own workforce. It is a bit complicated to make precise mappings, but I will try to explain in the following paragraphs.

    First, you may have seen “5GE” at the top right corner of your AT&T iPhone? If not, this picture depicts mine. Why does it say 5G now when my iPhone is an X or an iPhone 11?

    What happens behind the scenes is that an organization called “3GPP,” the “Third Generation Partnership Project,” establishes and maintains very complex standards. 3GPP is composed of other standards bodies and boils down to a group of companies that are 3GPP “members.” This club of “members” gets together in multiple committees to define, literally, how things will work and operate into the future.

    Some people in YouTube videos may say that 5G is the devil, and may even hint that 3GPP is part of the “new world order” here to control us all. The answer is no! That is just fiction and conspiracy theories from YouTubers. 3GPP is simply a group of companies, together with several other standards development organizations worldwide, that meet to agree on how to make things better, faster, and higher-performance.

    Let’s start with 4G, the fourth-generation wireless network, which we already use; you may be using it to read this post. 4G corresponds to the 3GPP “Technical Specifications” labeled Releases 8, 9, 10, and 11, and arguably 12 and 13, whereas 5G corresponds to the improved and changed “Technical Specifications” labeled Releases 15 and 16. For example, 3GPP Release 14 includes many new concepts not found in Release 8, among them the Internet of Things, Vehicle-to-Everything, and radio improvements, as shown in the Release 14 screenshot.

    In between those releases, say between Release 11 and Release 15, we find a gray area where 4G ends and 5G begins. In fact, many 5G features are found in releases that are supposedly part of 4G, while other 5G features appear only in newer releases. That is why a company like AT&T called this in-between stage “5GE” and presented us with a “logo,” the icon shown on your iPhone’s screen. This simple icon caused Sprint to file a lawsuit against AT&T, and it is not clear which 3GPP releases AT&T’s “5GE” refers to.

    Going into more detail, the 3GPP organization defines itself on the 3gpp.org website as follows:
    The 3rd Generation Partnership Project (3GPP) unites [Seven] telecommunications standard development organizations (ARIB, ATIS, CCSA, ETSI, TSDSI, TTA, TTC), known as “Organizational Partners” and provides their members with a stable environment to produce the Reports and Specifications that define 3GPP technologies.
    In fact, the communications standards created by 3GPP cover multiple technology areas, for instance Radio Access Networks (RAN), Services & Systems Aspects (SA), and Core Networks & Terminals (CT), among many others. 3GPP is hence, as you can imagine, a complex body, and I won’t go into more detail here.

    In my opinion, the main things that distinguish 4G from 5G are the core components, handover, use of frequencies, and physical layers, and more importantly the “softwarization” of the network, the use of software-based technologies together with much higher frequency bands, which leads to higher bit rates. In essence, today’s 4G and 5G core networks introduce a concept called “Network Function Virtualization,” or NFV, which brings the cloud and telecommunications systems together, making most changes in software and far fewer in hardware. Obviously, there are servers, CPUs, GPUs, and so on working to manage all the signals, but that hardware is multi-purpose and easily upgradable, which in the past it was not.

    Cloud Computing and Open Networks

    As you may expect, the main feature NFV brings is the virtualization of all network components, running as multiple instances or containers, in what is popularly called the cloud.
    It’s easy to confuse virtualization and cloud, particularly because they both revolve around creating useful environments from abstract resources. (RedHat.com)
    I agree with Red Hat’s comment; however, virtualization is what has made all cloud computing concepts possible, and it has clearly come to optimize and improve 4G and hence 5G systems.

    Therefore, as the cloud makes its way into 4G and even more into 5G, we find new terms that were not part of the telco vocabulary. One of those is the “Open Network,” for instance the “Open Mobile Evolved Core,” an open “Core” network. This is not necessarily “open source”; what it means is that APIs are used just as in any cloud-based environment, and components can be easily interconnected. Under this model, all 4G and 5G network components, and hence the entire 5G network, run in the cloud, with servers executing multiple instances of machines or containers.

    For example, there are many standardized 4G components, including the Mobility Management Entity (MME), Serving Gateway Control (SGW-C), Packet Gateway Control (PGW-C), and Policy and Charging Rules Function (PCRF), among others; these now run as server instances. The MME, for instance, is a server or cloud of servers that handles mobility and tracks the mobile terminal in the network, assisting the UE (User Equipment, your mobile phone) with handover and with selecting the right cell to move to as it travels around a geographical area. In the picture taken from the “Open Mobile Evolved Core,” we can read that all of these components may reside in one server, and the network created is virtual, a software-defined network.

    What is then Network Function Virtualization?

    In essence, Network Function Virtualization (NFV) is nothing but a way to put all these network elements or components into servers or virtual machines. These virtual machines run on standard VMware servers or as Docker instances, and you can then apply standard cloud computing concepts and tools like OpenStack or Kubernetes for what is called “orchestration,” the process of creating and managing instances.
    Orchestration is the automated configuration, coordination, and management of computer systems and software. A number of tools exist for automation of server configuration and management, including Ansible, Puppet, Salt, Terraform, and AWS CloudFormation. (Wikipedia)
    These virtual machines are the main fabric of the “cloud.” The “cloud” is a set of machines or virtual machines that ultimately still reside in data centers and servers somewhere, but that can be brought up from a “file” or an “image” that can be copied into multiple data centers and operate without issues. These images or files are stored as “containers” or “virtual machines” that are executed on a real, hardware machine to perform certain tasks.
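    To make the “an image becomes a running instance” idea concrete, here is a minimal sketch in Python using the docker SDK (docker-py). It assumes a local Docker daemon is running, and “nginx:alpine” is just a stand-in image for illustration, not a real network function:

        import docker  # docker-py; assumes a local Docker daemon is available

        client = docker.from_env()

        # Start a container instance from an image, the same pattern an NFV
        # platform follows when it spins up a virtualized network function.
        container = client.containers.run("nginx:alpine", detach=True, name="vnf-demo")
        print(container.name, container.status)

        # Tearing the instance down is just as cheap, which is the point of NFV.
        container.stop()
        container.remove()

    An orchestrator such as Kubernetes automates exactly this create-and-destroy cycle across many servers.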

    Edge Computing

    Another concept that has existed among computer science professionals for many decades is moving computation to the “edge” of the network. It seems quite obvious, but it is not. Edge computing is a natural evolution to optimize processes.

    Historically, mainframe computers controlled everything, with a dumb terminal that merely displayed what the mainframe processed. Later, the Personal Computer (PC) evolved, control passed down to the PC and some servers, and mainframes became less important. As progress created the internet, everything moved to the web and the cloud. Now control has passed to the cloud, and a distributed computing system governs what we do and how we do it. Hence, the closer computation moves to you, the better: lower latency and faster responses, though this also creates problems for the overall system in maintaining authentication, caching, and other dependencies.

    For example, we use edge computing every day in the “content delivery networks” (CDNs) used by NETFLIX or HULU to stream movies to thousands or millions of homes. A CDN’s main goal is to move content, music or video files, closer to your local internet link by making multiple copies available closer to the consumer, at the “edge” of the network. This concept is practical when all components use the same protocol, in this case IP, the Internet Protocol. The cloud and an “ALL-IP” network architecture are found in 4G already, and subsequently in 5G. Edge computing is therefore now a more formalized, feasible, and practical concept.

    The major evolution from 3G to 4G is that “all” components in 4G, including the core infrastructure (and obviously in 5G), run over the Internet Protocol, or IP. This is a major distinction between UMTS, CDMA, and older systems on the one hand and fourth-generation wireless networks on the other. In 5G, IP is also the main fabric for communications: all signaling, data, voice, and video no longer use proprietary signaling but IP. In the past, in UMTS or GPRS, IP was an afterthought, emulated on top of proprietary protocols, which made the networks slow and expensive to maintain. Obviously, challenges surfaced as 3G moved to 4G, including changes in handover protocols, billing, and network access.
    These proprietary protocols in 2G and 3G were designed to operate circuit-switched networks, whereas 4G is an all-IP network designed to operate as a 100% packet-based network.
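    As a toy illustration of the CDN idea, the sketch below caches content at the “edge” so that repeated requests skip the long trip to the origin server. The fetch_from_origin function and its 200 ms delay are hypothetical stand-ins, not any real CDN API:

        import time
        from functools import lru_cache

        def fetch_from_origin(segment_id: str) -> bytes:
            # Stand-in for retrieving a video segment from a distant origin server.
            time.sleep(0.2)  # simulated wide-area round trip
            return f"payload:{segment_id}".encode()

        @lru_cache(maxsize=1024)  # the "edge" cache, close to the consumer
        def edge_get(segment_id: str) -> bytes:
            return fetch_from_origin(segment_id)

        t0 = time.time(); edge_get("movie/seg001"); cold = time.time() - t0
        t0 = time.time(); edge_get("movie/seg001"); warm = time.time() - t0
        print(f"cold fetch: {cold * 1000:.0f} ms, cached fetch: {warm * 1000:.0f} ms")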
    Edge computing works well with all-IP networks and is now the way to go. As computation moves to the “edge,” and as the “edge” of the network is now a server or a cloud-based component of a bigger cloud, many new ideas are being formulated, and networking and computation blend into what is called today a “Software-Defined Network.” In other words, the network is defined virtually by software, and multiple networks can be created using the same physical interconnections.
    Making the cloud cool again
    Edge computing is defined as:
    Edge computing is a distributed computing paradigm which brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth. (Wikipedia)
    Edge computing is now a major area of innovation. For example, “Cloud to Cable,” my own patented technology, is an edge computing entity that brings caching (data storage) and computation, covered by my patents, close to the cable operator. The same holds for 4G and 5G systems, and I am personally working on how to achieve that at EGLA Research Labs. As a consequence of the use of the cloud, organizations like the Open Networking Foundation (ONF) and others are looking for ways to standardize how this is done.
    The Open Networking Foundation (ONF) is a nonprofit trade organization, funded by companies such as Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo!, aimed at promoting networking through software-defined networking (SDN) and standardizing the OpenFlow protocol and related technologies. The standards-setting and SDN-promotion group was formed out of recognition that cloud computing will blur the distinctions between computers and networks. The initiative was meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers and other networking areas. (Wikipedia)
    Obviously, now that the cloud powers 4G and 5G, the same standardization and SDN protocols like “OpenFlow” can plausibly be used in the network infrastructure of 5G systems. In fact, I have used the fabric of a software-defined network myself at EGLA since 2014, when we moved into an “Equinix” data center with the first version of the Mediamplify platform.

    Network Slicing and Beamforming

    Two concepts that have been introduced mostly with 5G are beamforming and network slicing. Network slicing is used to assign an IP address or a network to your own company or to a user; this matches quality of service to what a user or a company has paid for. Virtual networks are separated within the core network for the purposes of individualized routing and treatment of each user’s traffic.
    Network slicing is the separation of multiple virtual networks that operate on the same physical hardware for different applications, services or purposes.
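    A toy model of that separation, with made-up slice names and shares on a single 10 Gbps physical link, might look like this in Python:

        # Toy network-slicing model: three virtual networks share one physical
        # link, each with a guaranteed share and a scheduling priority.
        SLICES = {
            "broadband":   {"share": 0.6, "priority": 2},  # ordinary mobile users
            "low-latency": {"share": 0.1, "priority": 1},  # robotics, vehicles
            "iot":         {"share": 0.3, "priority": 3},  # massive IoT devices
        }
        LINK_GBPS = 10.0

        def guaranteed_gbps(name: str) -> float:
            # Each slice is guaranteed its share of the shared physical capacity.
            return LINK_GBPS * SLICES[name]["share"]

        for name in sorted(SLICES, key=lambda s: SLICES[s]["priority"]):
            print(f"{name}: {guaranteed_gbps(name):.1f} Gbps guaranteed")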
    Similarly, as parts of the network are “sliced,” the RF (radio frequency) signals are now formed into multiple beams. 5G operates at high bit rates, up to gigabits per second, but at much higher frequency bands, sub-6 GHz or above 6 GHz. At these frequency bands, the physics of signal propagation shrinks the area covered by a base station to a smaller footprint. In other words, the power levels and noise are not adequate to establish a link at 1 km but are great at 100 m, for example; as opposed to 4G, where a sector covers a wide area, a 5G sector may cover only a few meters. According to the Metaswitch site:
    “Due to the high propagation loss of the millimeter wavelengths (mmWaves) employed in 5G new radio (5G NR) systems, plus the high bandwidth demands of users, beamforming techniques and massive Multiple Input and Multiple Output (MIMO) are critical for increasing spectral efficiencies and  providing cost-effective, reliable coverage.”
    Hence, signals are sent from multiple antennas (MIMO) and received by multiple antennas at the phone. This is already being achieved in 4G, at a smaller scale, with a technology called “Carrier Aggregation.”
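    To see how beamforming works at the math level, here is a small NumPy sketch, with illustrative values I chose (a 28 GHz carrier and 8 half-wavelength-spaced elements), that steers a uniform linear array toward 30 degrees and verifies that the array response peaks there:

        import numpy as np

        # Uniform linear array, half-wavelength spacing (illustrative values).
        f = 28e9                      # a typical mmWave carrier frequency
        lam = 3e8 / f                 # wavelength
        N, d = 8, lam / 2             # 8 elements, lambda/2 apart
        steer = np.deg2rad(30)        # steer the beam toward +30 degrees

        # Per-element phase weights that align the wavefront at the steer angle.
        n = np.arange(N)
        weights = np.exp(-1j * 2 * np.pi * d * n * np.sin(steer) / lam)

        # Evaluate the array factor across all arrival angles.
        angles = np.deg2rad(np.linspace(-90, 90, 361))
        phases = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(angles), n) / lam)
        gain = np.abs(phases @ weights) / N
        print(f"peak response at {np.rad2deg(angles[np.argmax(gain)]):.1f} degrees")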
    5G Demonstration at MWC 2019
    As expected, now that all operations and the network itself are based on software, running on servers and virtual machines just like Google’s cloud, Amazon’s cloud, Azure, Digital Ocean, and EGLA CORP’s cloud-based servers, what can stop “Artificial Intelligence” from being used? The answer is nothing: Artificial Intelligence, or AI, has been incorporated to work with the network.

    AI, machine learning, and other methods are used for network optimization: using radio resources better, optimizing power management to decrease electricity bills, frequency reuse at the radio level, handover optimization, and network management with predictive failure detection. The machine learning mechanisms in existence today can learn from the large data logs collected by the telcos and are perfectly suitable for cellular networks. The cellular network adapts and generates ever more data across the thousands or hundreds of thousands of deployed base stations, with millions of users on phones, IoT (Internet of Things) devices, and connected vehicles. The possibilities are endless. Here are some AI-in-telco examples from my show, TECHED.TV:
    https://www.youtube.com/watch?v=ZFO5Z3fRCZI
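    To give a flavor of the predictive-failure idea, the sketch below trains a scikit-learn classifier on synthetic base-station logs. The features, failure rule, and all numbers are invented for illustration, not taken from any real telco dataset:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        load = rng.uniform(0, 1, n)            # normalized traffic load
        errs = rng.poisson(3 + 20 * load)      # block errors rise with load
        temp = rng.normal(40 + 15 * load, 5)   # equipment temperature (C)
        X = np.column_stack([load, errs, temp])
        # Toy ground truth: failures cluster at high load plus high error counts.
        y = ((load > 0.8) & (errs > 18)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")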

    Low Latency and High Bandwidth

    Low latency is another factor of great importance for 5G; it is what makes robotics and self-driving cars possible. Before, latency was 800 ms or even a few seconds, say in GPRS and CDMA2000; even UMTS provided latencies of 200-300 ms.
    Network speeds in 5G will be in the Gigabits per Second
    High bandwidth, as expected, will be on the order of Gbps, or gigabits per second.
    5G Speeds
    LTE decreased latency to a few tens of milliseconds, but remember that you have to connect to the internet and account for all signaling, which yields an overall latency of 40-60 ms, still unsuitable for remote robotics.
    Robotics enabled using 5G
    Since the access network is sliced, a portion of it can be allocated a higher priority, decreasing the overall access latency to a few milliseconds, which is perfectly suitable for robotics and automation, or even self-driving cars, AR, and 5G gaming.
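    A back-of-the-envelope latency budget shows why the edge plus a prioritized slice matters; all the numbers below are illustrative assumptions, not measurements:

        # Toy end-to-end latency budgets, in milliseconds.
        def round_trip(radio_ms, transport_ms, compute_ms):
            # Out-and-back over radio and transport, plus server processing.
            return 2 * (radio_ms + transport_ms) + compute_ms

        lte = round_trip(radio_ms=15, transport_ms=10, compute_ms=10)  # distant cloud
        nr = round_trip(radio_ms=1, transport_ms=0.5, compute_ms=1)    # edge + slice
        print(f"LTE to cloud: ~{lte:.0f} ms, 5G slice to edge: ~{nr:.0f} ms")

    With the assumed values this yields roughly 60 ms for LTE against a few milliseconds for a 5G slice terminating at an edge server, consistent with the figures above.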
    5G Gaming
  • Global Mobile Awards – Judge for Mobile World Congress 2020

    Dr. Edwin A. Hernandez will be attending as a judge for the Global Mobile Awards at the Mobile World Congress 2020 in Barcelona, Spain.
    Stay tuned for coverage from TECHED.tv

  • Introduction to Big Data in RF Analysis

    Big Data in RF Analysis

    Big Data provides tools and a framework to analyze data, in fact, very large amounts of data. Radio frequency (RF) measurements provide large amounts of information that, depending on how they are modeled or collected, fit many statistical models, and RF behavior is in general predicted using passive filtering techniques.

    The main tools for Big Data include statistical aggregation functions, learning algorithms, and supporting software. Many tools can be purchased, but many are free, although the free ones may require a certain level of software engineering. I love Python, and the main modules I use in Python are:

    • Pandas
    • SciPy
    • NumPy
    • SKLearn

    and there are many more used for the analysis and post-processing of RF captures; a minimal sketch using a few of them follows.
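    For instance, this small Pandas/NumPy sketch aggregates signal samples onto a coarse grid; the coordinates and RSRP values are randomly generated stand-ins for a real drive-test capture:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        # Hypothetical drive-test capture: one RSRP sample (dBm) per point.
        df = pd.DataFrame({
            "lat":  rng.uniform(25.76, 25.78, 1_000),
            "lon":  rng.uniform(-80.20, -80.18, 1_000),
            "rsrp": rng.normal(-95, 8, 1_000),
        })

        # Statistical aggregation: bin points into a coarse grid and summarize.
        df["grid"] = df["lat"].round(3).astype(str) + "," + df["lon"].round(3).astype(str)
        summary = df.groupby("grid")["rsrp"].agg(["mean", "std", "count"])
        print(summary.sort_values("count", ascending=False).head())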

    Drive Test and Data Simulation

    In general, many drive-test tools are used to capture RF data from LTE/4G and many other systems. Among the vendors we can find Spirent and many others, and we can capture RF from multiple base stations and map those captures to Lat/Long positions in a particular area covered by many base stations. Obviously, a drive test cannot cover the entire area, so, as expected, extrapolation and statistical models are required to complete it.
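    One common way to fill the gaps is spatial interpolation. The sketch below uses SciPy’s griddata on toy measurements generated from a log-distance path-loss rule; the base-station location and constants are assumptions for illustration only:

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(0)
        # Sparse measurements taken along hypothetical drive-test routes (meters).
        pts = rng.uniform(0, 1000, (500, 2))
        dist = np.hypot(pts[:, 0] - 500, pts[:, 1] - 500)  # base station at (500, 500)
        rsrp = -70 - 30 * np.log10(1 + dist)               # toy log-distance path loss

        # Interpolate onto a regular grid to estimate coverage between routes.
        gx, gy = np.mgrid[0:1000:50j, 0:1000:50j]
        est = griddata(pts, rsrp, (gx, gy), method="linear")
        print(f"grid {est.shape}, unfilled cells outside the data hull: {np.isnan(est).sum()}")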

    In a simulator, just as in MobileCDS and other simulators, especially those based on “ray tracing,” electromagnetic models are used to compute the RF received by an antenna.

     

    Big Data Processing for a Massive Simulation

    Unstructured data models are loaded from KML and other 3D simulation formats that include polygons and buildings situated on top of a Google Earth map or another map vendor’s imagery. Intersecting the propagation model with the 3D database requires massive data processing, using MapReduce and Hadoop to handle the simulation.
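    The MapReduce pattern itself is simple. Here is a self-contained Python sketch that computes the mean field strength per grid cell, with a map step running in parallel workers and a reduce step merging their partial sums; the cell IDs and values are made up:

        from collections import defaultdict
        from multiprocessing import Pool

        def partial_sums(chunk):
            # Map step: each worker reduces one chunk of (cell_id, field_dBm) samples.
            acc = defaultdict(lambda: [0.0, 0])
            for cell, field in chunk:
                acc[cell][0] += field
                acc[cell][1] += 1
            return dict(acc)

        def merge(partials):
            # Reduce step: merge per-worker partials into a mean field per cell.
            total = defaultdict(lambda: [0.0, 0])
            for p in partials:
                for cell, (s, n) in p.items():
                    total[cell][0] += s
                    total[cell][1] += n
            return {cell: s / n for cell, (s, n) in total.items()}

        if __name__ == "__main__":
            chunks = [[("A1", -90.0), ("A2", -85.0)], [("A1", -92.0)]]
            with Pool(2) as pool:
                print(merge(pool.map(partial_sums, chunks)))  # {'A1': -91.0, 'A2': -85.0}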

    Hadoop and MapReduce for RF Processing

    The data is then stored in unstructured models with RF information, including the electromagnetic field, frequency, time, delay, error, and other parameters, mapped to each Lat/Long or (x, y, z) coordinate in the plane being modeled. The tools are usually written in Python, and processing of the CSV/TXT files holding the electromagnetic data and the 3D map being rendered can be parallelized across multiple Hadoop nodes.
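    With Hadoop Streaming, that parallelization boils down to a mapper and a reducer that read stdin and write stdout. The pair below averages field strength per coordinate; the CSV layout (lat, lon, field_dBm, ...) is my assumption about the capture format, not a fixed standard:

        # mapper.py -- emits "lat,lon<TAB>field_dBm" for each CSV record
        import sys

        for line in sys.stdin:
            parts = line.strip().split(",")
            if len(parts) >= 3:
                lat, lon, field = parts[0], parts[1], parts[2]
                print(f"{lat},{lon}\t{field}")

        # reducer.py -- averages field strength per coordinate key
        # (Hadoop Streaming delivers mapper output sorted by key.)
        import sys

        key, total, count = None, 0.0, 0
        for line in sys.stdin:
            k, v = line.strip().split("\t")
            if k != key and key is not None:
                print(f"{key}\t{total / count:.1f}")
                total, count = 0.0, 0
            key = k
            total += float(v)
            count += 1
        if key is not None:
            print(f"{key}\t{total / count:.1f}")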

     

    As you can see, Hadoop over GlusterFS is our choice, as we don’t see that much value in HDFS, the Hadoop Distributed File System, which would otherwise handle all the files and worker systems. As you can tell, we are fans of GlusterFS, and all our Hadoop cluster nodes are managed in a massive processing setup over high-performance 10Gb fiber networks.

    Big Data Models: OLTP and OLAP Processing

    Definitions of the OLTP and OLAP data models can be found online:

    “OLTP (On-line Transaction Processing) is characterized by a large number of short on-line transactions (INSERT, UPDATE, DELETE). The main emphasis for OLTP systems is put on very fast query processing, maintaining data integrity in multi-access environments, and an effectiveness measured by the number of transactions per second. In an OLTP database there is detailed and current data, and the schema used to store transactional databases is the entity model (usually 3NF).

    OLAP (On-line Analytical Processing) is characterized by a relatively low volume of transactions. Queries are often very complex and involve aggregations. For OLAP systems, response time is an effectiveness measure. OLAP applications are widely used by Data Mining techniques. In an OLAP database there is aggregated, historical data, stored in multi-dimensional schemas (usually a star schema).”
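    The contrast is easy to see with Python’s built-in sqlite3 module: OLTP is many short writes, while OLAP is one heavy aggregate over history. The KPI table and its values are invented for illustration:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE kpi (cell TEXT, day TEXT, drops INTEGER)")

        # OLTP-style workload: many short transactional writes.
        rows = [("A1", "2020-01-01", 3), ("A1", "2020-01-02", 7), ("B2", "2020-01-01", 1)]
        con.executemany("INSERT INTO kpi VALUES (?, ?, ?)", rows)
        con.commit()

        # OLAP-style workload: one complex aggregate over historical data.
        for cell, avg in con.execute("SELECT cell, AVG(drops) FROM kpi GROUP BY cell"):
            print(cell, round(avg, 2))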

    Conclusion

    We have different research areas:

    • Analysis of data for handover protocols,
    • Data mining for better antenna positioning,
    • Machine learning techniques for better PCRF policies, and more.