| article_id | title | content | excerpt | categories | tags | author_name | publish_date | publication_year | word_count | keywords | extracted_tech_keywords | url | complexity_score | technical_depth | industry_relevance_score | has_code_examples | has_tutorial_content | is_research_content |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,136,562
|
China Unveils New Draft Regulation to Regulate AI Generated Content
|
China is actively taking steps towards increasing transparency around AI-generated information being rolled out on the internet. The Cyberspace Administration of China (CAC), the country’s national internet regulator, recently released a draft regulation that includes labeling instructions for AI-generated content. The regulation, titled the AI-Generated Synthetic Content Labeling Measures (人工智能生成合成内容标识办法(征求意见稿)), targets providers of AI-generated text, images, audio, and video to enhance transparency. The draft takes inspiration from laws such as the Cybersecurity Law and the AI Service Management Provisions. The goal is to introduce a unified standard for AI-related content moderation and reduce the growing amount of misinformation and deepfakes on the internet. Explicit Labels: Visible marks such as disclaimers or watermarks must be placed on AI-generated text, images, audio, and video. For example, AI-generated videos need clear marks on the opening frame, while text must display disclaimers at appropriate points. Implicit Labels: Hidden data, such as metadata or watermarks, must be embedded in AI-generated files. These markers contain information such as the content’s source, the AI service provider, and a unique identifier. Implicit labels are not immediately visible but can be detected by platforms and authorities to verify content authenticity. Implementing these regulations comes with a financial barrier. Platforms like Xiaohongshu, Bilibili, Weibo, Douyin, and Kuaishou have implemented AI content declarations, but doing the same would be tough for smaller firms. This step is also an effort towards national and public security in China. China vs India Beijing has always remained cautious of emerging technologies, and a step ahead in regulating them, especially when compared to the US or EU. The director of the CAC, Zhuang Rongwen, is an influential figure not just in China but also in the West. Part of the TIME 100 list released last month, Rongwen has consistently worked to implement the values of the Chinese Communist Party (CCP) and has actively ensured the country’s sway in the growing GenAI race against the West. When ChatGPT launched in 2022, China introduced legislation the following year to control the explosion of AI, requiring companies to obtain government approval before deploying models publicly. “China is very much ahead of the game in terms of self-regulating AI within their own nation-state,” said Sen. Mark Warner in an interview with Politico last year on how China leads the world on AI rules, leaving the rest behind. In India, earlier this year, MeitY issued an advisory (which was later revised) on the labeling of AI-related content online, which came after the Gemini AI fiasco. The earlier version of the advisory also required companies to seek government approval before launching new models, a provision that drew heavy criticism for potentially hindering innovation.
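The draft does not publish a byte-level format for implicit labels, but the description above (content source, AI service provider, and a unique identifier embedded as metadata) can be pictured with a small hypothetical payload. The field names and values below are assumptions for illustration only, not taken from the CAC text:

```python
import json

# Hypothetical implicit-label metadata; field names are illustrative, not from the draft.
implicit_label = {
    "content_source": "ai_generated",
    "service_provider": "Example GenAI Provider",
    "content_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",  # unique identifier
}

# Such a payload would be embedded in the file's metadata rather than shown to viewers.
print(json.dumps(implicit_label, ensure_ascii=False, indent=2))
```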
|
Meanwhile, India’s AI legislation is yet to be drafted, but it would hopefully include watermarks and labels on AI-generated content.
|
["AI News"]
|
["Generative AI"]
|
Aditi Suresh
|
2024-09-24T20:44:07
|
2024
| 444
|
["Go", "ChatGPT", "GenAI", "AWS", "AI", "innovation", "GPT", "Aim", "Generative AI", "R", "AI-generated content"]
|
["AI", "GenAI", "ChatGPT", "Aim", "AWS", "R", "Go", "GPT", "innovation", "AI-generated content"]
|
https://analyticsindiamag.com/ai-news-updates/china-unveils-new-draft-regulation-to-regulate-ai-generated-content/
| 2
| 10
| 0
| false
| true
| false
|
10,054,542
|
Microsoft Open Sources This “Mixture of Experts” Models Library
|
Tutel is a library from Microsoft that enables building mixture of experts (MoE) models – a subset of large-scale AI models. Tutel is open source and has been included in fairseq, one of Facebook’s PyTorch toolkits, to enable developers across AI disciplines. Microsoft’s Ownership of MoE MoE is composed of small clusters of “neurons” that are activated only under very precise conditions. Lower “layers” of the MoE model extract features, which specialists then evaluate. For instance, MoEs can develop a translation system, with each expert cluster learning to handle a distinct chunk of speech or grammatical norm. Deep learning architecture MoE has a computational cost that is less than the number of parameters, making scalability easy. MoEs have different advantages over other model architectures. They can specialise in response to situations, allowing the model to exhibit a broader range of behaviours. Indeed, MoE is one of the few methodologies proved to scale to over a trillion parameters, paving the door for models to power computer vision, speech recognition, natural language processing, and machine translation systems. Parameters are the components of a machine learning model that are learned from historical training data. The association between factors and sophistication has generally held up well, particularly in the language domain. Tutel Features Tutel is primarily concerned with optimising MoE-specific computing. The library is optimised, in particular, for Microsoft’s new Azure NDm A100 v4 series instances, which offer a sliding scale of NVIDIA A100 GPUs. In addition, Tutel features a “simple” interface designed to facilitate integration with other MoE systems, according to Microsoft. Alternatively, developers can leverage the Tutel interface to include standalone MoE layers directly into their DNN models. Tutel’s comprehensive and adaptable MoE algorithmic support enables developers working in various AI disciplines to perform MoE more quickly and efficiently. Its high compatibility and extensive feature set ensure optimal performance when dealing with the Azure NDm A100 v4 cluster. Tutel is a free and open-source project that has been integrated into fairseq. Optimisations to Tutel’s MOE Tutel is a complement to previous high-level MoE solutions such as fairseq and FastMoE. It focuses on optimising MoE-specific computation and all-to-all communication and providing diverse and adaptable algorithmic MoE support. Tutel’s user interface is straightforward, making it simple to combine with other MoE systems. Alternatively, developers can use the Tutel interface to embed independent MoE layers directly into their own DNN models, gaining immediate access to highly optimised state-of-the-art MoE capabilities. Computations for the MoE Due to a lack of efficient implementations, MoE-based DNN models construct the MoE computation using a naive mixture of numerous off-the-shelf DNN operators given by deep learning frameworks such as PyTorch and TensorFlow. Due to redundant computing, this method incurs large performance overheads. Tutel develops and implements several highly efficient GPU kernels that provide operators for MoE-specific computation. In addition, Tutel will actively integrate emerging machine learning algorithms from the open-source community. Conclusion Microsoft is particularly interested in MoE because it makes efficient use of hardware. Computing power is only used by professionals with the specialised knowledge required to address a problem. 
The remainder of the model patiently awaits its turn, which increases efficiency. Microsoft demonstrates its commitment by launching Tutel, an open-source library for constructing MoE models. According to Microsoft, Tutel helps developers expedite the operation of MoE models and maximise hardware efficiency. MoE offers holistic training through techniques from various disciplines. Tutel has a considerable advantage over the fairseq implementation, as demonstrated by the researchers. It has also been incorporated into the DeepSpeed framework, which benefits Azure services. To know more about Tutel, read here.
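To make the idea of experts and gating more concrete, here is a minimal, illustrative top-1 mixture-of-experts layer in PyTorch. It is a conceptual sketch of the routing described above, not Tutel’s actual interface or kernels:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Illustrative top-1 mixture-of-experts layer (conceptual sketch, not Tutel's API)."""
    def __init__(self, d_model: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)  # router that scores tokens per expert
        self.experts = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token is routed to exactly one expert.
        scores = F.softmax(self.gate(x), dim=-1)      # (tokens, num_experts)
        top_w, top_idx = scores.max(dim=-1)           # top-1 routing weight and expert index
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e                       # tokens assigned to expert e
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(8, 16)         # 8 tokens, model width 16
print(TinyMoE(16)(x).shape)    # torch.Size([8, 16])
```

Only the experts that actually receive tokens do any work in the forward pass, which is the property the article describes as making MoE hardware-efficient.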
|
Tutel is an implementation of the mixture-of-experts technique for large-scale DNN model training.
|
["Global Tech"]
|
["AI Models", "Deep Learning", "Microsoft", "Natural Language Processing", "Tensorflow"]
|
Dr. Nivash Jeevanandam
|
2021-12-01T10:00:00
|
2021
| 596
|
["AI Models", "Go", "machine learning", "AI", "PyTorch", "Natural Language Processing", "R", "computer vision", "RAG", "deep learning", "Deep Learning", "TensorFlow", "Azure", "Microsoft", "Tensorflow"]
|
["AI", "machine learning", "deep learning", "computer vision", "TensorFlow", "PyTorch", "RAG", "Azure", "R", "Go"]
|
https://analyticsindiamag.com/global-tech/tutel-mixture-of-experts-technique/
| 3
| 10
| 0
| false
| false
| false
|
10,060,193
|
Odisha students’ moonshot rover wins third prize in NASA’s HERC challenge
|
Ten students from Odisha secured the third position in the high school division of NASA Human Exploration Rover Challenge (HERC) 2021. The posse became India’s 1st U-19 interdisciplinary team to be selected for NASA HERC 2021. The students–aged between 14 to 19–were selected to make a rover named NaPSAT 1.0.NASA’s HERC aligns with the Artemis mission to explore the Moon by 2024, and the rover made by the students could be part of the mission.Two US student teams won first and second places in the challenge. The India students are from Young Tinker and Navonmesh Prasar foundation with Anil Pradhan and Vaishali Sharma as their mission directors. The team behind NaPST 1.0 included 1.Rishikesh Amit Nayak (student lead) 2.Anjishnu Pattanayak(management ) 3. Shreyansh Vikas Mishra( Ergonomics) 4. Kanuri Varshini( presenter) 5. Nitesh Patnaik ( designer) 6. Kailash Chandra Barik (maker) 7. Danda Pani Patra (welder) 8. Rina Bagha (welder) 9. Tanvi Mallick ( media) 10. Ankan Mondal ( brakes and suspension) From Cycle mechanic to product developer Kailash Chandra Barik worked as a cycle mechanic in his father’s shop in his spare time. He is an ITI student at the Skill Development Institute in Bhubaneswar. “I was encouraged to apply for Young Tinker Academy and was selected with a scholarship. Later, with the help of STEM education and training, I became part of the NASA HERC 2021. We were supposed to go to NASA with our rover, but we were unable to visit due to the pandemic and were announced winners online. So, now I work as a product developer at Young Tinker,” he said. From roadside cycle mechanic to Product Developer. Meet the 20-year old Kailash Barik, my student turned team member. And this is his story….Kailash was identified by my team in 2020 when we were selecting a 10-membered Under-19 team for NASA Rover Challenge 2021. pic.twitter.com/q0hPpYQkVn— Anil Pradhan (@AnilPradhanEdu) January 30, 2022 What is NASA’s HERC? Each year, NASA (HERC) throws down an engineering design challenge to engage students across the globe to push space exploration. HERC challenges high school and college students to build a vehicle designed to traverse the simulated surface of another world. NASA is planning to send the first woman to the Moon in 2024. Like Artemis, NASA’s HERC will send an astronauts’ team to discover unknown terrains of the Moon. In addition, lunar science on the surface of the Moon will be executed by 2024 with polar and nonpolar landers and rovers to explore areas not investigated by the Apollo mission. Innovations abound The competition emphasises designing, constructing and testing technologies, including tools, mobility devices and traversing in unique environments. The Artemis program will prepare the students for the Moon. The rover built by the Indian team can navigate different terrains. “The rover is working perfectly as the team’s various innovations helped in building the rover,” the team said. The team used a three gears system instead of the usual two gears in their crank arm system. The team hit upon this idea while researching the previous rovers of the competition and found chain slacking was a common problem. To solve this, they developed an innovative three gear system. For the steering system, the team used a triangular plate for proper force distribution instead of the usual bars used in the bar-linkage system. These stand out innovations had helped the Indian team to finish third in NASA’s global challenge. 
The Indian students also won an award in the video category for their presentation of the rover-making video.
|
Each year, NASA (HERC) throws down an engineering design challenge to engage students across the globe to push space exploration.
|
["AI Features"]
|
["Interviews and Discussions", "NASA", "odisha"]
|
Poornima Nataraj
|
2022-02-09T18:00:00
|
2022
| 585
|
["Go", "NASA", "odisha", "AI", "programming_languages:R", "innovation", "programming_languages:Go", "RAG", "R", "Interviews and Discussions"]
|
["AI", "RAG", "R", "Go", "innovation", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/odisha-students-moonshot-rover-wins-third-prize-in-nasas-herc-challenge/
| 3
| 7
| 0
| false
| false
| false
|
10,067,720
|
Build your first feature store with Feast
|
A feature store keeps track of frequently used features. Feature Stores have evolved into a critical component of data infrastructure for machine learning systems. They handle the whole lifetime of features, from training multiple models to giving low-latency access to features for model inference by online applications. This article introduces Feast, a data management system for managing and providing machine learning features. With features provided by Feast, we will cover here how to build our own feature store. Following are the topics to be covered. Table of contents What is a Feature store?What kind of Feature stores are available?Why use the Feature store?Are there any drawbacks to the feature store?Creating a Feature store using Feast Let’s start with a high-level idea of a feature store. What is a Feature store? Simply put, a feature store is a tool used for storing commonly used features. Typically, when data scientists develop features for a machine learning model, those features can be added to the feature store so that it can be reused later. The process of developing features is also known as feature engineering, and it is a complex but necessary component of any machine learning process. Better features equal better models, which equals a better business outcome. The Feature Store for machine learning is a feature computation and storage service that allows features to be registered, found, and utilized in both ML pipelines and online applications for model inference. Feature Stores are frequently required to store huge amounts of feature data while also providing low-latency access to features for online applications. Producing a pipeline for generating a new feature is only one component of the labour involved in creating a new feature. To get to that point, undoubtedly you need to go through a long process of trial and error with a wide range of features until it is satisfactory. Then it is needed to be computed and saved as part of an operational pipeline, which varies based on whether the feature is online or offline. A feature store, on the other hand, is more than just a data layer; it is also a data transformation service that allows customers to change raw data and store it as features that can be utilized by any machine learning model. Analytics India Magazine Are you looking for a complete repository of Python libraries used in data science, check out here. Are there any types of Feature stores? A feature store is often built as a dual-database system, with a low latency online feature store (typically a key-value store or real-time database) and a scale-out SQL database for storing huge amounts of feature data for training and batch applications. There are two ways a feature store can be utilized. Offline – Some characteristics are computed offline as part of a batch task. For instance, average monthly spending. They are mostly employed by offline processes. Because of their nature, these traits might take some time to develop. Offline features are often generated using frameworks like Spark or by conducting SQL queries against a given database and then utilizing a batch inference procedure.Online – These characteristics are a little more challenging since they must be computed quickly and are frequently served in milliseconds. Calculating a z-score, for example, for real-time fraud detection. In this scenario, the pipeline is created in real-time by computing the mean and standard deviation across a sliding window. 
These computations are substantially more difficult, needing both quick computation and fast data access. The information can be kept in memory or a rapid key-value database. Why use the Feature store? Reduced development time Data engineering setups are frequently seen as a spun-out process. Some attributes are difficult to compute and necessitate constructing aggregation, whilst others are rather simple. As a result, the idea behind a feature store is to abstract all of those engineering layers and make it simple to read and write features. Working with a feature store abstracts this layer so that when a developer is looking for a feature, he or she may use a simple API to retrieve the data instead of writing technical code. Model deployment in production is smooth One of the most difficult aspects of deploying machine learning in production is that the features used for training a model in the development environment are not the same as the characteristics in the production serving layer. As a result, establishing a consistent feature set across the training and serving layers allows for a more seamless deployment process, guaranteeing that the trained model accurately reflects how things will operate in production. Better model performance The feature store stores extra metadata for each feature in addition to the actual features. For example, a statistic that demonstrates the influence of a feature on the model with which it is related. This information may greatly assist data scientists in picking characteristics for a new model, allowing them to focus on those that have had the most influence on similar current models. Track lineage and address regulatory compliance It is critical to trace the lineage of algorithms being built to follow rules and laws, especially when the AI models being generated serve areas such as Healthcare, Financial Services, and Security. To do this, insight into the complete end-to-end data flow is required to better understand how the model generates its outputs. Because features are created as part of the process, it is necessary to follow the flow of the feature creation process. The lineage of the feature could be retained in a feature store. This offers the essential tracking information, capturing how the feature was developed, as well as the insight and reports required for regulatory compliance. Are there any drawbacks to the feature store? There are certain drawbacks of the feature store which are listed below. There is a Potential inflexibility in the feature store. Organizations require a unique feature store for each entity type.Complex integration may necessitate the integration of many technologies such as data warehouses, streaming pipelines, and processing engines.Limits model customization-Various applications may benefit from different feature encodings that would be unnoticed if they all used the same feature store. Creating a Feature store using Feast Let’s start with installing the Feast package. If using the google Colab notebook then first install the below-mentioned dependency. ! pip install tf-estimator-nightly==2.8.0.dev2021122109 %%sh pip install feast -U -q pip install Pygments -q The ‘%%sh’ is a shell command-line interpreter that will interpret the command line as it was in the Linux operating system. Create the feature repository ! feast init feature_repo Analytics India Magazine This command will initialize the feature repository which is a directory that holds the feature store’s settings as well as individual features. 
This configuration is written in Python/YAML. In this article, we will be using a demo repository by Feast. Let’s inspect the contents of the repository.

%cd feature_repo
!ls -R

‘%cd’ will change the directory to the repository created. Let’s have a look at the project configuration.

! pygmentize feature_store.yaml

Now the feature repository is created and the project is configured. The data is to be loaded and the features are to be defined.

import pandas as pd
from datetime import datetime, timedelta
import sqlite3
from feast import FeatureStore

raw_df = pd.read_parquet("data/driver_stats.parquet")
raw_df.head()

In this demo repository, there is already a predefined file containing the feature definitions, so we do not need to define them ourselves. Let’s have a look at those features.

! pygmentize -f terminal16m example.py

Now the features are defined and they need to be applied to the data.

! feast apply

The features have been applied and the data is ready to be split into training and testing sets.

dict_df = pd.DataFrame.from_dict(
    {
        "driver_id": [1001, 1002, 1003],
        "label_driver_reported_satisfaction": [1, 5, 3],
        "event_timestamp": [
            datetime.now() - timedelta(minutes=11),
            datetime.now() - timedelta(minutes=36),
            datetime.now() - timedelta(minutes=73),
        ],
    }
)

store = FeatureStore(repo_path=".")

training_data = store.get_historical_features(
    entity_df=dict_df,
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
).to_df()

Let’s upload these features to the online store so that they can be utilized globally in the hub.

! feast materialize-incremental {datetime.now().isoformat()}

Once the features are uploaded to the database, a few directories will have been created. Let’s check those directories and try to extract the newly uploaded features from the database using SQL.

print("--- Data directory ---")
!ls data

There are three items: the raw data, the updated features, and the implementation of those features on the data (the updated data).

con_online = sqlite3.connect("data/online_store.db")
print("\n--- Schema of online store ---")
print(
    pd.read_sql_query(
        "SELECT * FROM feature_repo_driver_hourly_stats", con_online
    ).columns.tolist()
)
con_online.close()

‘sqlite3’ makes a connection with the database, and the data can be extracted by simply writing an SQL query. Conclusion Feature stores are a tool that organizations may utilize to generate several models based on one or a few entities. A feature store’s main advantage is that it contains the logic of feature transformations, allowing it to automatically transform fresh data and provide examples for training or inference. With this hands-on article, we have understood feature stores and their importance to a data science project. References: Link to the above code; Documentation for Feast
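The walkthrough above stops at materialising features into the online store; for completeness, here is a minimal sketch of how those features could then be read back at low latency. It assumes the same demo repository and entity IDs used above and follows Feast’s documented get_online_features API (exact argument names can vary slightly across Feast versions):

```python
from feast import FeatureStore

# Assumes the demo repo created above and that `feast materialize-incremental` has already run.
store = FeatureStore(repo_path=".")

online = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}, {"driver_id": 1002}],
).to_dict()

print(online)  # feature values served from the online (SQLite) store
```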
|
A feature store is a tool used for storing commonly used features.
|
["AI Trends"]
|
["database management", "feature engineering", "feature selection"]
|
Sourabh Mehta
|
2022-05-24T15:29:58
|
2022
| 1,534
|
["data science", "Feast", "machine learning", "AI", "ML", "feature engineering", "database management", "Colab", "RAG", "analytics", "fraud detection", "feature selection", "Pandas"]
|
["AI", "machine learning", "ML", "data science", "analytics", "Feast", "Colab", "Pandas", "RAG", "fraud detection"]
|
https://analyticsindiamag.com/ai-trends/build-your-first-feature-store-with-feast/
| 4
| 10
| 2
| true
| true
| false
|
10,164,760
|
New Relic Integrates Agentic AI with ServiceNow to Boost IT Automation
|
New Relic has announced an AI-driven integration with ServiceNow, further expanding its open agent ecosystem. The collaboration aims to provide enterprises with deeper insights and intelligent recommendations while automating workflows to minimise downtime and revenue loss. By embracing agent-to-agent AI integrations connected via natural language APIs, New Relic’s Intelligent Observability Platform enables IT teams to process vast amounts of data efficiently. The platform identifies hidden issues and delivers actionable insights directly within ServiceNow’s interface, eliminating the need for context switching and reducing operational inefficiencies. Manav Khurana, chief product officer at New Relic, said, “Enterprises have more data than any team of engineers can handle. Our AI-driven integrations automate tasks, unify disparate data sources, and surface business-critical insights, allowing users to take immediate action.” The integration between New Relic and ServiceNow allows IT teams to access real-time production data, including errors, logs, security vulnerabilities, and alerts, all within their existing workflows. Through natural language processing, users can query performance insights, compare historical trends, and receive alert intelligence reports, significantly improving decision-making and incident response times. Brian Emerson, GVP and GM of IT operations management and cloud observability at ServiceNow, mentioned, “By integrating observability data from multiple vendors, including New Relic, into ServiceNow’s workflows, we’re creating a unified, agent-to-agent experience. This enables IT teams to focus on high-impact incidents and enhance customer experiences at scale.” Moreover, New Relic is introducing Predictions, a feature that uses machine learning to analyse historical data and forecast potential issues before they impact business operations. By identifying pre-incident patterns, Predictions enable IT teams to proactively address slow-burning problems and prevent digital disruptions. The AI-driven integrations with ServiceNow, Google Gemini, GitHub Copilot, and Amazon Q Business are now available within the New Relic Intelligent Observability Platform. New Relic Opens Office in Bengaluru In November last year, New Relic opened a new office in Bengaluru with space for over 300 employees. The office was designed to host teams from product, engineering, sales, finance, and operations, boosting innovation and better serving its growing global customer base. Earlier in March 2022, it opened its first office in Bengaluru, and in October 2022, the Hyderabad Product Innovation Centre was established. Notably, New Relic is expanding its Bengaluru presence to strengthen its presence in India.
|
The integration allows IT teams to access real-time production data, including errors, logs, security vulnerabilities, and alerts, within their existing workflows.
|
["AI News"]
|
["AI (Artificial Intelligence)", "Automation", "new relic"]
|
Vidyashree Srinivas
|
2025-02-27T16:22:16
|
2025
| 372
|
["Go", "API", "machine learning", "AI", "innovation", "Automation", "Git", "Aim", "disruption", "GitHub", "new relic", "R", "AI (Artificial Intelligence)"]
|
["AI", "machine learning", "Aim", "R", "Go", "Git", "GitHub", "API", "innovation", "disruption"]
|
https://analyticsindiamag.com/ai-news-updates/new-relic-integrates-agentic-ai-with-servicenow-to-boost-it-automation/
| 2
| 10
| 3
| false
| false
| false
|
10,002,126
|
Sony’s PlayStation 5 Finds An Ally In Microsoft, Can Upstage Google Stadia
|
Google broke into the cloud gaming space with the announcement of Google Stadia, an on-demand game streaming platform. This would be backed up by Google’s bevy of services and robust cloud infrastructure. Now, Sony seems to have caught up with Google, as its newest announcement portends towards the rise of cloud gamings. From performance improvements to accessibility, the PlayStation 5 will bring the future of gaming to every screen. The PS4, Replaced? Before anything, Sony made it clear that the PS4 is going to be the current generation of the PlayStation brand for at least the next 2 years. The console is still very profitable for Sony, and has a few exclusive game titles coming to the platform. The company even conducted a mid-generation refresh of the PS4 in the form of the PS4 Pro. The addition of more computing power allowed for the console to have 4K graphics and better performance. The PS4 Pro’s life cycle is not mature yet, which means that there are many games coming for the console. Moreover, Sony’s profits have not run dry, and the developer market for PS4 is still sizeable. In a corporate strategy meeting, Sony stated: “[The PlayStation 4] will remain the engine of engagement and profitability for the next three years.” Under The Hood Of ‘Next-Gen’ However, the fact remains that the PS5, which Sony referred to as the ‘next-gen’ console, is already beginning to be fleshed out. The console’s specifications have also surfaced, with AMD providing the guts for both Sony and Microsoft consoles this year. The AMD APU, with the codename Gonzalez, features AMD’s Zen 2 architecture for the CPU. It is built on the 7nm manufacturing process, and features the upcoming Navi architecture for the GPU. Moreover, the chip is also said to be capable of real-time ray tracing; a feature reserved previously for only Nvidia cards. Twitter user TUM_APISAK, who has leaked many details in the past, posted a serial number of the chip in the PS5; 2G16002CE8JA2_32/10/10_13E9. This makes the CPU a 3.6 GHz, 95W TDP, AM4 chip with 8 cores and, possibly, 16 threads. The GPU details are unknown as of yet. The PS5 is said to have 8+ teraFLOPs of power, and almost 2x increase over the last gen PS4 Pro. Added to this, it is equipped with 12GB of the latest gen GDDR6 RAM. Moreover, it is also said to come with an SSD to cut down on game loading times. The console can also support graphics with a resolution upto 8K. At decreased resolutions, it is possible that the machine could play games at a resolution of 4K at 60 frames per second. Today, PlayStation’s Remote Play option offers the option to play on any device, which is set for an “evolution” according to Sony. PlayStation Now, the on-demand game streaming service run by Sony, could also play a huge role in the hybrid future of the PS5. The crown gem of the PS5 could lie in its purported cloud gaming properties. An Unlikely Ally In a recent meeting, Sony showcased the power of the PS5, referred to as ‘next-gen’ in their marketing materials. Showing loading times on the Spiderman game for PS4, Sony showed that the next-gen console showed an almost 10x improvement. The time decreased from 8 seconds on the PS4 Pro to 0.8 seconds on the PS5. While this is possible with a custom SSD, which sources say runs 19 times faster than a normal one, this hints at cloud gaming hybrid. It is also similar to Google’s Instant Access philosophy, which showed Stadia loading games in about 3 seconds. 
To beat the 800 pound gorilla in the game streaming space, Sony partnered with its long-time console market rival, Microsoft. Under this partnership, the two giants will collaborate on a cloud gaming platform. Sony announced that they were entering into a partnership with Microsoft for use of their Azure cloud platform. Keeping in theme with this, Microsoft have positioned themselves as providing cloud game development service. They recently announced a host of services for game development on Azure. For cloud gaming, the most important part is how close the datacenter is to the end user in terms of latency. Microsoft has a lead in the cloud market and also features data centers in high population density regions all over the world. This explains the reason for Sony picking their rival. The partnership shows multiple things. Primarily, it shows Sony and Microsoft recognizing the dominance of Google as a web giant. The sheer reach that Google has across their suite of products is comparable only to Microsoft’s Windows platform. Moreover, the move also shows Microsoft’s expertise in the cloud market. All the parties have time to iron out all the details, as the cloud market as a whole progresses towards game streaming. Leviathans Begin Revolution Google’s Stadia is set to launch later this year, with the worldwide rollout taking place over the next few years. On the other hand, Sony and Microsoft are gearing up to launch their consoles in the second half of next year. Amazon, another prominent cloud services provider, has also expressed plans of moving into cloud-powered gaming. Microsoft also seems to be coming up with a budget version of the XBox for its next generation of consoles. Reportedly, this will not have a disc drive for physical games and could be powered by Microsoft’s xCloud streaming platform. The move towards cloud gaming seems to be the future of the gaming market, moving the game from the living room to any screen. The natural evolution of games has reached the cusp of its next step, as seen by industry leaders looking to take on the trend.
|
Google broke into the cloud gaming space with the announcement of Google Stadia, an on-demand game streaming platform. This would be backed up by Google’s bevy of services and robust cloud infrastructure. Now, Sony seems to have caught up with Google, as its newest announcement points towards the rise of cloud gaming. From performance improvements […]
|
["Global Tech"]
|
["Azure", "Cloud Gaming", "Google", "Microsoft", "Sony", "stadia"]
|
Anirudh VK
|
2019-05-23T17:01:48
|
2019
| 950
|
["Go", "API", "Cloud Gaming", "cloud_platforms:Azure", "programming_languages:R", "AI", "R", "programming_languages:Go", "Ray", "Google", "Sony", "Azure", "Microsoft", "stadia"]
|
["AI", "Ray", "Azure", "R", "Go", "API", "cloud_platforms:Azure", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/global-tech/sonys-playstation-5-finds-an-ally-in-microsoft-can-upstage-google-stadia/
| 4
| 9
| 2
| false
| false
| false
|
10,040,498
|
Beginners Guide To Linear Regression In Python
|
Machine Learning is the scientific process of developing an algorithm that learns the pattern from training data and performs inferences on test data. If a machine learning process is meant to predict some output value, it is called supervised learning. On the other hand, if there is no output value prediction, it is called unsupervised learning. Training data in supervised learning contains a set of features and a target. The machine learning algorithm learns from the features to map corresponding targets. Test data contains only features so that the model should predict the targets. Features and targets are also called independent variables and dependent variables, respectively. Training data in unsupervised learning contains only features but not any target. Rather than mapping features and targets as in supervised learning, an unsupervised learning model performs clustering (grouping) the input data based on the patterns among them. Supervised learning is classified into two categories: Regression Classification Supervised learning is called regression if the dependent variable (aka target) is continuous. Supervised learning is called classification if the dependent variable is discrete. In other words, a regression model outputs a numerical value (a real floating value), but a classification model outputs a class (among two or more classes). In this article, we discuss linear regression and its implementation with python codes. Regression analysis can be specifically termed linear regression if the dependent variable (target) has a linear relationship with the independent variables (features). The Math behind Linear Regression Suppose a collection of data has two variables: one is the independent variable (X), and another is the dependent variable (Y). If the relationship between Y and X can be expressed as: Y = mX + c, this is called linear regression. Here, X is linearly scaled with a weight m to determine the value of Y and c is called bias or y-intercept with which the dependency offsets. A machine learning model has to determine the most suitable values for weight, m and bias, c. If there are more than one independent variable, there will be a corresponding number of weights, w1, w2, w3, and so on. Typically, a machine learning problem contains a remarkable amount of data. A linear regression model assigns random values to weights and bias at the beginning. When learning commences, the model is fed with one data point in each step. It fits the X values and determines the target. Since weights are randomly assigned initially, the predicted target will differ greatly from the actual target. The model calculates the difference between the actual target value and the predicted target value, which is called the loss. The model scientifically reassigns the values of weights to reduce this loss. With each data point, the model iteratively attempts to find suitable weights that yield minimum loss. The most preferred losses are mean absolute error (MAE) and mean squared error (MSE). Mean absolute error is the mean value of the sum of differences between predicted and actual target values for all data points. Mean squared error is the mean value of the sum of squares of differences between predicted and actual target values for all data points. Linear regression employs mean squared error (MSE) as its loss function. When learning is finished, the loss value will be at its minimum. In other words, the predicted value will be as close as possible to the actual target value. 
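Before moving to the library-based implementation, here is a tiny, self-contained illustration of the two losses just described, using made-up numbers purely for demonstration:

```python
import numpy as np

# Made-up targets and predictions, only to illustrate the two losses described above.
y_true = np.array([3.0, 5.0, 7.5, 10.0])
y_pred = np.array([2.5, 5.5, 7.0, 11.0])

mae = np.mean(np.abs(y_true - y_pred))   # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)    # mean squared error (the loss linear regression minimises)

print(mae)  # 0.625
print(mse)  # 0.4375
```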
We try to get a better understanding in the sequel with a practical problem and a hands-on Python implementation. Load a Regression Dataset Import the necessary libraries and modules.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_diabetes
from sklearn.metrics import mean_squared_error

Load a regression problem dataset from SciKit-Learn’s in-built datasets. The data is already preprocessed and normalized, and is ready to use.

data = load_diabetes()
data.keys()

Output: Generate the features and target. Visualize the top 5 rows of the data.

features = pd.DataFrame(data['data'], columns=data['feature_names'])
target = pd.Series(data['target'], name='target')
features.head()

Output: Simple Linear Regression Simple linear regression is performed with one dependent variable and one independent variable. In our data, we declare the feature ‘bmi’ to be the independent variable. Prepare X and y.

X = features['bmi'].values.reshape(-1,1)
y = target.values.reshape(-1,1)

Perform linear regression.

simple = LinearRegression()
simple.fit(X,y)

The training is completed. We can explore the weight (coefficient) and bias (intercept) of the trained model.

simple.coef_

Output:

simple.intercept_

Output: Calculate the predictions following the formula y = intercept + X*coefficient.

calc_pred = simple.intercept_ + (X*simple.coef_)

Predictions can also be calculated using the trained model.

pred = simple.predict(X)

We can check whether the calculated predictions and the model’s predictions are identical.

(calc_pred == pred).all()

Output: Plot the actual values and predicted values to get a better understanding.

# plot actual values
plt.scatter(X, y, label='Actual')
# plot predicted values
plt.plot(X, pred, '-r', label='Prediction')
plt.xlabel('Feature X')
plt.ylabel('Target y')
plt.title('Simple Linear Regression', color='orange', size=14)
plt.legend()
plt.show()

Output: According to SciKit-Learn’s LinearRegression method, the above red line is the best possible fit with the minimal error value. We can calculate the mean squared error value for the above regression using the following code.

mean_squared_error(y, pred)

Output: This error value seems too high because of the nature of the actual data. It can be observed from the above plot that the target has multiple values corresponding to a single feature value. The data is highly scattered and cannot be fit completely with a straight line. However, we may wish to conclude how good the fit is; the error alone yields an incomparable number. A parameter named the Coefficient of Determination (CoD) is helpful in this case. CoD gives the ratio of the regression sum of squares to the total sum of squares. The total sum of squares (SST) is the sum of squared deviations of each y value from the mean value of y. The regression sum of squares (SSR) is the difference between the total sum of squares and the sum of squared errors (SSE). When there is no error (MSE = 0), CoD becomes unity. When the sum of squared errors equals the total sum of squares (SSE = SST), CoD becomes zero. CoD = 1 refers to the best prediction; CoD = 0 refers to the worst prediction. CoD lies in the range [0, 1], which makes predictions comparable. CoD is also called the R-squared value. It can be calculated using the following code.

simple.score(X,y)

Output: With such high scatteredness in the data, 0.34 is the best possible fit by linear regression. Multiple Linear Regression Multiple linear regression is performed with more than one independent variable.
We choose the following columns as our features.

columns = ['age', 'bmi', 'bp', 's3', 's5']

Let’s have a look at the data distribution by plotting it.

for i in columns:
    plt.scatter(features[i], y)
    plt.xlabel(str(i))
    plt.show()

Output: It is observed that each individual feature is scattered in nature. But the variation in target values for a single input feature value may be explained by some other features. In other words, the target may be hard to fit with a linear regression model on a single feature. Nevertheless, it may yield an improved fit with multiple features by exploring the true pattern in the data. In the simple linear regression implementation, we used all our data to fit the model. But how can we test our model? How will our model perform on unforeseen data? This is where the train-test split comes into play. We split our dataset into two sets: a training set and a validation set. We train our model with the training data only and evaluate it on the validation set. Let’s split the dataset into training and validation sets.

from sklearn.model_selection import train_test_split

X = features[columns]
# 80% training data, 20% validation data
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=6)

Build a linear regression model and fit the data.

multi = LinearRegression()
multi.fit(X_train, y_train)

What are the weights (coefficients) of our model? There should be five coefficients, one corresponding to each feature.

multi.coef_

Output: What is the intercept (bias) of our model?

multi.intercept_

Output: We have built and trained our model. Let’s predict the target values corresponding to the features in the validation data.

pred = multi.predict(X_val)

We can evaluate the model by calculating the error or the R-squared value.

mean_squared_error(y_val, pred)

Output: Calculate the R-squared value for both the training set and the validation set.

multi.score(X_train, y_train), multi.score(X_val, y_val)

Output: With more features, the model’s performance rises. Using the statsmodels Library We have used the SciKit-Learn library so far to perform linear regression. However, we can use the statsmodels library to perform the same task. Fit the training data on the OLS (Ordinary Least Squares) model available in the statsmodels library.

import statsmodels.api as sm

# add constant (intercept) manually
X_train = sm.add_constant(X_train)
# fit training data
model = sm.OLS(y_train, X_train).fit()
model.summary()

Output: It can be observed that the model weights, intercept and the R-squared value are all identical to those of the LinearRegression method of the SciKit-Learn library. The model can be used to make predictions on the validation data too.

# Constant (intercept) must be added manually
X_val = sm.add_constant(X_val)
preds = model.predict(X_val)
mean_squared_error(y_val, preds)

Output: The errors are the same for both methods! This notebook contains the above code implementation. Wrapping Up In this article, we have discussed machine learning, its classification, and the categorization of supervised learning based on the nature of the dependent variable. Further, we explored simple linear regression and multiple linear regression with examples using the SciKit-Learn library. We performed the same task with the statsmodels library and obtained the same results.
References and Further Reading: SciKit-Learn’s LinearRegression modulestatsmodels’ OLS moduleLinear Regression Cheat sheetRead about Linear Regression on WikiComprehensive Guide To Regression For DummiesHow To Do Linear Regression In Excel
|
Linear regression is a machine learning task that finds a linear relationship between the features and a target that is a continuous variable.
|
["Deep Tech"]
|
["Guide", "linear regression", "scikit learn"]
|
Rajkumar Lakshmanamoorthy
|
2021-05-23T13:00:00
|
2021
| 1,603
|
["linear regression", "scikit-learn", "NumPy", "machine learning", "TPU", "AI", "ML", "Python", "scikit learn", "Matplotlib", "R", "Guide", "Pandas"]
|
["AI", "machine learning", "ML", "scikit-learn", "Pandas", "NumPy", "Matplotlib", "TPU", "Python", "R"]
|
https://analyticsindiamag.com/deep-tech/beginners-guide-to-linear-regression-in-python/
| 4
| 10
| 0
| true
| true
| true
|
31,107
|
Intel & Brazilian Robotics Company Hoobox Build World’s First AI Wheelchair
|
In an attempt to leverage artificial intelligence for good, Brazilian robotics company Hoobox Robotics released the Wheelie 7 kit, powered by Intel. The Wheelie 7, a motorized wheelchair, gives people greater mobility by letting them control it with simple facial expressions. The AI-powered wheelchair uses AI and a camera, without invasive body sensors, providing users with independence and control over their location. As per the company statement, there are more than 60 people in the US testing the Wheelie 7 – most of whom are quadriplegics, people with amyotrophic lateral sclerosis or senior citizens. Anna Bethke, leader of AI for Social Good at Intel, remarked, “Today on International Day of Persons with Disabilities, it’s important to recognize the ways technology can help people regain mobility and control of their lives. The Wheelie 7 kit from HOOBOX Robotics is a great example of using AI to enable people with limited mobility to move around using natural facial movements.” Dr Paulo Pinheiro, co-founder and CEO of HOOBOX Robotics, claims that the Wheelie 7 is the first product to use facial expressions to control a wheelchair. “This requires incredible precision and accuracy, and it would not be possible without Intel technology,” he said. Interestingly, it takes just around seven minutes to install the Wheelie 7 kit, and the three main functions people can perform with the AI-powered wheelchair are moving forward, turning and stopping. The statement indicated that instead of invasive body sensors, the Wheelie 7 is outfitted with a 3D Intel® RealSense™ Depth Camera SR300 mounted on the wheelchair to stream data that AI algorithms process in real time to control the chair. Given the importance of immediate responsiveness, HOOBOX uses Intel processors and the Intel® Distribution of OpenVINO™ Toolkit to speed up the inferencing of the facial recognition software. According to data from the National Spinal Cord Injury Statistical Center, there are approximately 288,000 people in the US living with spinal cord injuries, and about 17,700 new cases crop up every year. A 2018 study found that physical mobility has the largest impact on the quality of life of people with spinal cord injuries. With the Wheelie 7, Intel and Hoobox have taken a concrete step towards helping people regain their mobility, in contrast to alternatives that rely on complex sensors placed on the body, require special training to operate, or depend on caregivers.
|
In an attempt to leverage artificial intelligence for good, Brazilian robotics company Hoobox Robotics released the Wheelie 7 kit, powered by Intel. The Wheelie 7, a motorized wheelchair, gives people greater mobility by letting them control it with simple facial expressions. The AI-powered wheelchair uses AI and a camera, without invasive body sensors, providing […]
|
["AI News"]
|
["Intel"]
|
Richa Bhatia
|
2018-12-05T13:04:23
|
2018
| 395
|
["Go", "artificial intelligence", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "Aim", "ai_applications:robotics", "R", "Intel"]
|
["AI", "artificial intelligence", "Aim", "RAG", "R", "Go", "programming_languages:R", "programming_languages:Go", "ai_applications:robotics"]
|
https://analyticsindiamag.com/ai-news-updates/intel-brazilain-robotics-company-hoobox-build-world-first-ai-wheelchair/
| 4
| 9
| 0
| true
| true
| false
|
10,035
|
Is it time for more focus on analytics at undergraduate education?
|
I do not remember precisely when I picked up statistics. It was not till after my undergraduate education, for sure. I had a basic understanding of probability, correlation and regression, but that’s about it. Most of what I know about stats and analytics was gradually picked up on the job. And that’s a real shame. Our basic education has not given much importance to statistics as a discipline, for various reasons. I did an exercise; I picked the syllabus of common mathematics subjects taught in the first year of my undergrad. Findings: statistics was almost non-existent, while there is a disproportionately greater focus on calculus within applied mathematics. Yes, calculus finds usage in countless areas, but so does statistics. And isn’t the focus of most applied professional undergrad degrees (like engineering) the real usage in the industry? One of the reasons for statistics being overlooked traditionally is that statistics has never been considered part of mathematics. Staunch mathematicians have always looked at statistics with deep skepticism. For one thing, statistics has a huge dependency on mathematics, but so do other disciplines like physics, biology, accounting etc. Another reason for this skepticism is the fact that statistics, unlike mathematics, is not an exact science. Mathematicians like to see things in absoluteness, certainty, like 2 + 5 is 7 and it will always be that. Statistics, on the other hand, is the science of uncertainty, of understanding and measuring the non-exact world that we live in. An event with a probability of 0% can still happen (think of the home price crash of 2006–08). While the debate over whether statistics is mathematics rages on in academic circles, my fear is that statistics (and eventually analytics) falls into the same trap as financial engineering. Most engineering graduates today fumble with basic concepts of accounting, even though it is used in almost all walks of professional life, just as most engineering graduates struggle with basic concepts of statistics, like probability distributions. True, statistics and financial engineering have been given a deep focus in business schools in some way or form. Yet, isn’t it too late to have these disciplines formally introduced? Alternatively, we have seen many postgrad programs in analytics come up in recent years, but rarely any at the undergrad level. I hope and wish that the focus on analytics increases early on in our education, either through the introduction of formal analytics courses at undergrad levels, or by gradually pushing to have statistics embraced by academicians of applied disciplines. There is no doubt in my mind that the applied usage of statistics in current times is extremely high, even for someone who’s not exactly working on analytics per se. And this is only going to increase over time. A good foundation of statistics at early stages of education would go a long way in creating better-skilled future professionals.
|
I do not remember precisely when I picked up statistics. It was not till after my undergraduate education, for sure. I had a basic understanding of probability, correlation and regression, but that’s about it. Most of what I know about stats and analytics was gradually picked up on the job. And that’s a real shame. Our basic […]
|
["AI Trends"]
|
[]
|
Дарья
|
2016-05-30T05:15:30
|
2016
| 471
|
["Go", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "analytics", "R"]
|
["AI", "analytics", "RAG", "R", "Go", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-trends/time-focus-analytics-undergraduate-education/
| 2
| 7
| 2
| false
| false
| false
|
10,044,261
|
Real-World Blind Face Restoration with Generative Facial Prior
|
Technological developments in this decade have led to some of the most awe-inspiring discoveries. With rapidly changing technology and systems to support it and provide back-end processing power, the world seems to be becoming a better place to live day by day. Technology has reached such new heights that nothing our ingenious minds think of today looks impossible to accomplish. The driving factor of such advancements in this new era of technological and computational superiority seems to revolve around two of the most highly debated domains, namely Machine Learning and Artificial Intelligence. The canvas and ideal space that these two domains provide are unfathomable. Breakthrough discoveries in both Machine Learning and Artificial Intelligence have pushed the boundaries of technological need even further and shown us the possibilities both possess. Discoveries in both these fields have also led to industrial development and advancements in system automation and digitization. A wide spectrum of industries – robotics, healthcare, fintech, telecommunications, and image and videography, to name a few – have all benefited from these advancements. Hence, legacy players such as Google, Amazon and Facebook have been investing in further research and development in both Machine Learning and Artificial Intelligence, trying to answer the big question of today’s era: “What next?” Machines are being taught and made more and more intelligent and self-sufficient. Machines loaded with appropriate functional software and algorithms can be made capable enough to perform tasks that humans generally cannot. One such task, which seems to be creating a buzz on social media these days, is restoring old images and making them look brand new. Sometimes, during the process of image acquisition or storage, images get degraded for various reasons. Degradation may come in various forms, such as motion blur, noise, and camera misfocus. Image restoration generally aims to compensate for or, in other terms, “undo” defects that have degraded an image. It is a complex and challenging task in the field of image processing. The restoration process improves the image’s appearance, and the main goal is to restore it to how it looked when it was originally synthesized. The degraded image can be described as the convolution of the original image with a degradation function, plus additive noise. The process of restoration deconvolves the degraded image to obtain a denoised and deblurred estimate of the original image. To restore the image to its original form, knowledge of the degradation and how it happens must be incorporated. In the case of image restoration through artificial intelligence, involving high computational processing power, the knowledge around degradation is taught to a machine or system through algorithmic models and machine and deep learning techniques. But image restoration is not to be confused with image enhancement; they are similar yet different. Image restoration differs from image enhancement in that restoration is more of an objective operation, whereas enhancement tends to be subjective. A mathematical function cannot precisely represent image enhancement, whereas image restoration is related more to feature extraction from the imperfect image. Image restoration assumes a degradation model that is known or can be estimated.
In short, image enhancement aims to improve bad images so they will "look" better, while restoration aims to invert the known degradation operations applied to images.

About Blind Face Restoration with Generative Facial Prior

Blind face restoration aims at recovering high-quality faces from low-quality counterparts suffering from degradation caused by factors such as low resolution, noise, blur, or compression artefacts. In real-world scenarios, restoration becomes even more challenging because of more complicated and severe degradation and diverse poses or expressions. Blind face restoration therefore relies on facial priors, such as facial geometry priors or reference images, to help restore realistic and faithful details. The model discussed here uses a Generative Facial Prior (GFP) for real-world blind face restoration, a prior implicitly encapsulated in a pretrained Generative Adversarial Network (GAN) such as StyleGAN. These face GANs can generate faithful faces even across a high degree of variability, thereby providing rich and diverse priors such as geometry, facial textures, and colours, making it possible to jointly restore facial details and enhance colours. Traditional restoration models typically used GAN inversion: they first 'invert' the degraded image back to a latent code of the pretrained GAN and then perform image-specific optimisation to reconstruct the input image. In contrast, GFP-GAN uses carefully designed components to achieve a good balance of realness and fidelity in a single forward pass. GFP-GAN consists of a degradation removal module and a pretrained face GAN acting as a facial prior, inter-connected by a direct latent code mapping and several Channel-Split Spatial Feature Transform (CS-SFT) layers applied in a coarse-to-fine manner. The CS-SFT layers perform spatial modulation on one split of the features and let the remaining features pass through directly for better information preservation, allowing the model to incorporate generative features while retaining the high fidelity of the input image. GFP-GAN also uses a facial component loss with local discriminators to further enhance perceptual facial details, while emphasising identity preservation to further improve image fidelity.

Architecture of the GFP-GAN Framework

The architecture consists of a degradation removal module (a U-Net) and a pretrained face GAN (such as StyleGAN2), inter-connected by a latent code mapping and several Channel-Split Spatial Feature Transform (CS-SFT) layers. Specifically, the degradation removal module is designed to remove the complicated degradation in the input image and extract two kinds of features: latent features (F_latent) that map the input image to the closest latent code in StyleGAN2, and multi-resolution spatial features used to modulate the StyleGAN2 features. During training, the model emphasises the following: intermediate restoration losses to remove complex degradation, a facial component loss with discriminators to enhance facial details, and an identity preserving loss to retain face identity.
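To make the CS-SFT idea above more tangible, here is a minimal, hedged PyTorch sketch of such a layer. This is not the official GFP-GAN implementation; the channel split ratio and the small convolutional head that predicts the scale and shift maps are assumptions made purely for illustration.

# Illustrative sketch of a Channel-Split Spatial Feature Transform (CS-SFT) layer
# (an assumption for demonstration, not the official GFP-GAN code)
import torch
import torch.nn as nn

class CSSFTLayer(nn.Module):
    def __init__(self, gan_channels, cond_channels, split_ratio=0.5):
        super().__init__()
        # Number of GAN feature channels that will be spatially modulated;
        # the remaining channels pass through unchanged for information preservation.
        self.mod_channels = int(gan_channels * split_ratio)
        # Small conv head (assumed) that predicts per-pixel scale and shift maps
        # from the spatial features of the degradation removal module.
        self.scale_shift = nn.Sequential(
            nn.Conv2d(cond_channels, cond_channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(cond_channels, self.mod_channels * 2, 3, padding=1),
        )

    def forward(self, gan_feat, cond_feat):
        # Split the GAN features along the channel dimension.
        mod_part = gan_feat[:, :self.mod_channels]
        identity_part = gan_feat[:, self.mod_channels:]
        # Predict spatial scale (alpha) and shift (beta) maps from the condition.
        alpha, beta = self.scale_shift(cond_feat).chunk(2, dim=1)
        # Spatial feature transform on one split only: F' = alpha * F + beta.
        modulated = alpha * mod_part + beta
        # Concatenate the modulated and untouched splits.
        return torch.cat([modulated, identity_part], dim=1)

# Usage sketch with illustrative shapes:
# layer = CSSFTLayer(gan_channels=512, cond_channels=512)
# out = layer(torch.randn(1, 512, 64, 64), torch.randn(1, 512, 64, 64))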
Image Source: https://arxiv.org/pdf/2101.04061.pdf

Similar to a perceptual loss, the identity preserving loss is based on the feature embedding of an input face. It uses the pretrained ArcFace face recognition model, which captures the most prominent features for identity discrimination in the input image. GFP-GAN comes pre-trained on the FFHQ dataset, which consists of around 70,000 high-quality face images, all resized to 512×512 during training. GFP-GAN is trained on synthetic data that approximates real low-quality images and generalises to real-world images at inference time.

A Comparison of GFP-GAN to Other Models

As can be seen, the GFP-GAN model for blind face restoration retains image quality and restores the facial features present in the image better than traditionally used models.

Getting Started

This article will implement a face restoration model using the Generative Facial Prior model and restore degraded images that contain noise and blur. The following implementation is inspired by the creators of GFP-GAN, and the link to their official repository can be found here.

Setting Up the Environment

To start building our image restoration model, we will first install all the required dependencies to set up the environment for our model.

# Installing the dependencies
# Install PyTorch
!pip install torch torchvision

# Check torch and CUDA versions
import torch
print('Torch Version: ', torch.__version__)
print('CUDA Version: ', torch.version.cuda)
print('CUDNN Version: ', torch.backends.cudnn.version())
print('CUDA Available:', torch.cuda.is_available())

Next up, we will install BasicSR, an open-source image and video restoration library based on PyTorch.

!BASICSR_EXT=True pip install basicsr

We will also be using FaceXLib, a PyTorch-based library for face-related functions such as detection, alignment, recognition, tracking, and utilities for face restoration.

!pip install facexlib
!mkdir -p /usr/local/lib/python3.7/dist-packages/facexlib/weights  # for pre-trained models

Cloning the GFP-GAN repository,

!rm -rf GFPGAN
!git clone https://github.com/TencentARC/GFPGAN.git
%cd GFPGAN
# install extra requirements
!pip install -r requirements.txt

Further, loading the pre-trained GAN model,

# loading the pretrained GAN model
!wget https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth -P experiments/pretrained_models

Performing Operations

We will start the process by providing our model with low-quality, noisy images to be restored.

# visualize the cropped low-quality faces
import cv2
import matplotlib.pyplot as plt

# helper to read an image and convert it from BGR to RGB
def imread(img_path):
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img

# read images
img1 = imread('inputs/cropped_faces/Adele_crop.png')
img2 = imread('inputs/cropped_faces/Julia_Roberts_crop.png')
img3 = imread('inputs/cropped_faces/Justin_Timberlake_crop.png')
img4 = imread('inputs/cropped_faces/Paris_Hilton_crop.png')

# show images
fig = plt.figure(figsize=(25, 10))
ax1 = fig.add_subplot(1, 4, 1)
ax1.imshow(img1)
ax1.axis('off')
ax2 = fig.add_subplot(1, 4, 2)
ax2.imshow(img2)
ax2.axis('off')
ax3 = fig.add_subplot(1, 4, 3)
ax3.imshow(img3)
ax3.axis('off')
ax4 = fig.add_subplot(1, 4, 4)
ax4.imshow(img4)
ax4.axis('off')

Output:

Now we will use GFP-GAN to restore the above low-quality images.
To do so, we will run the following command,

# --model_path: the path to the pre-trained GFPGAN model
# --test_path: the folder path to the low-quality images
# --aligned: whether the input images are aligned
!python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --aligned

Loading the results,

# Loading the results!
!ls results

Visualising the output: the images on the left are the inputs we fed in, and the images on the right show the processed output.

import cv2
import matplotlib.pyplot as plt

# helper to read an image and convert it from BGR to RGB
def imread(img_path):
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img

# read images
img1 = imread('results/cmp/Adele_crop_00.png')
img2 = imread('results/cmp/Julia_Roberts_crop_00.png')
img3 = imread('results/cmp/Justin_Timberlake_crop_00.png')
img4 = imread('results/cmp/Paris_Hilton_crop_00.png')

# show images
fig = plt.figure(figsize=(15, 30))
ax1 = fig.add_subplot(4, 1, 1)
ax1.imshow(img1)
ax1.axis('off')
ax2 = fig.add_subplot(4, 1, 2)
ax2.imshow(img2)
ax2.axis('off')
ax3 = fig.add_subplot(4, 1, 3)
ax3.imshow(img3)
ax3.axis('off')
ax4 = fig.add_subplot(4, 1, 4)
ax4.imshow(img4)
ax4.axis('off')

Output:

As we can see from the output, several types of degradation, such as noise and loss of facial detail, were removed, and the images were restored with enhanced colour and fidelity. We can also apply restoration to just the facial area of a whole low-quality input image.

# Feeding in the whole images to be restored
import cv2
import matplotlib.pyplot as plt

def imread(img_path):
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img

# read images
img1 = imread('inputs/whole_imgs/00.jpg')
img2 = imread('inputs/whole_imgs/10045.png')

# show images
fig = plt.figure(figsize=(25, 10))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(img1)
ax1.axis('off')
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(img2)
ax2.axis('off')

Applying restoration to just the facial area using the following code,

# setting the path and processing
!rm -rf results
!python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs

# loading results
!ls results/cmp

# Visualizing the results
import cv2
import matplotlib.pyplot as plt

def imread(img_path):
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img

# read images
img1 = imread('results/cmp/00_00.png')
img2 = imread('results/cmp/00_01.png')
img3 = imread('results/cmp/10045_02.png')
img4 = imread('results/cmp/10045_01.png')

# show images
fig = plt.figure(figsize=(15, 30))
ax1 = fig.add_subplot(4, 1, 1)
ax1.imshow(img1)
ax1.axis('off')
ax2 = fig.add_subplot(4, 1, 2)
ax2.imshow(img2)
ax2.axis('off')
ax3 = fig.add_subplot(4, 1, 3)
ax3.imshow(img3)
ax3.axis('off')
ax4 = fig.add_subplot(4, 1, 4)
ax4.imshow(img4)
ax4.axis('off')

Output:

EndNotes

In this article, we looked at how image restoration works, a vital part of the image processing pipeline, and implemented an image restoration model based on the Generative Facial Prior model to restore degraded images. I would recommend that readers try the same on more severely degraded images to explore the model's capabilities further. The above implementation is available as a Colab notebook, which can be accessed using the link here. Happy Learning!

References
Official GFP-GAN Paper
Introduction to Image Restoration
What is Image Restoration
|
Technology and Technological developments in this decade have led to some of the most awe-inspiring discoveries. With rapidly changing technology and systems to support them and provide back-end processing power, the world seems to be becoming a better place to live day by day. Technology has reached such new heights that nothing our ingenious mind […]
|
["AI Trends"]
|
["AI (Artificial Intelligence)", "GANs", "Image Classification", "image processing", "image reconstruction", "StyleGAN"]
|
Victor Dey
|
2021-07-24T18:00:00
|
2021
| 1,909
|
["artificial intelligence", "machine learning", "TPU", "AI", "image processing", "image reconstruction", "PyTorch", "StyleGAN", "Colab", "RAG", "Aim", "deep learning", "GANs", "AI (Artificial Intelligence)", "Matplotlib", "Image Classification"]
|
["AI", "artificial intelligence", "machine learning", "deep learning", "Aim", "PyTorch", "Colab", "Matplotlib", "RAG", "TPU"]
|
https://analyticsindiamag.com/ai-trends/real-world-blind-face-restoration-with-generative-facial-prior/
| 4
| 10
| 0
| true
| false
| true
|
53,919
|
Top Paper Presentations You Must Not Miss At MLDS 2020
|
Just a few days away now, Machine Learning Developers Summit, which is to be held on 22-23 Jan in Bengaluru and on 30-31 Jan in Hyderabad, has created a buzz around the tech community. With MLDS, Analytics India Magazine aims to bring researchers and innovators together on one platform, where they will be presenting their research papers on various topics like machine learning, deep learning, and robotic process automation (RPA). In this article, we list down the top paper presentations at MLDS that attendees should not miss: 1| A Novel Approach For Product Recommendation Engine Using Graph Database Location: Bengaluru Presenter: Naman Mishra, Data Scientist at Genpact About: In this paper, the author will be presenting a novel approach using graph algorithms for building a product recommendation solution for a publishing company. He will be talking about the developed approach that focuses on the popular books and courses inside a local community identified by the graph algorithms to generate recommendations. 2| Emotional Stress Detection Using Deep Learning Location: Bengaluru Presenter: Nithya Vasudevan, Analyst, Data Science at Verizon About: In this paper, the author will be presenting an approach that uses neural network architectures with attention mechanisms to spot people suffering from prolonged stress. He will also talk about how the proposed solution can help long-term sufferers by predicting and analysing their emotions using brainwaves recorded through a Neurosky Brainwave Headset. 3| Revolutionising Safety In Railways Using Computer Vision Location: Bengaluru Presenter: Vibhav Patil, Senior Machine Learning Engineer at Racetrack.ai About: In this paper, the author will be presenting a complete computer vision-based solution to critical issues such as rail accidents and animals dying on railway tracks. He will also be talking about methods based on spatial-temporal relationships and long-range cameras, as well as object detection using deep learning models. 4| A Heuristic Optimisation Solution For The Selection Of The Transformation Functions For Media Channels In Marketing Mix Modelling (MMM) Location: Bengaluru Presenter: Madhav Kaushik, AVP at Analyttica Datalab About: In this paper, the author will be presenting a heuristic optimisation methodology for the selection of the transformation functions for the media channels. He will also be explaining how it is automated and how it has been developed into a prototype that can be applied in similar situations towards the process of measuring “Above the Line” (ATL) marketing effectiveness and optimisation. 5| NLP Driven Qualitative Assessment Of Product Description Location: Bengaluru Presenter: Siddharth Vij, Data Scientist at Happiest Minds About: In this paper, the author will be presenting an automated mechanism using lexical, syntactic, and contextual NLP techniques to assess the quality of product descriptions by scoring the usage of personal, sensorial, functional and superlative language in a description. 6| Artificial Intelligence For Simplified Deployments Location: Bengaluru Presenter: Suchit Mathur, Product Expert at SAP Labs India About: In this paper, the author will be presenting an approach to automate custom manual actions using an RPA framework and a machine learning-enabled natural language processing system for providing inputs to a system through voice, text, etc.
7| Measuring Digital Marketing Effectiveness Using Incrementality Location: Bengaluru Presenter: Shubham Gupta, Data Scientist at MiQ Digital About: In this paper, the author will be presenting different approaches to calculate incremental lift that can be implemented in the digital marketing ecosystem, such as viewability. He will also talk about the concepts of test environment setup, randomisation, bias handling, hypothesis testing, primary output, and understanding different ways of using the output of incremental lift — for instance, strategy planning and optimisations, and achieving higher campaign efficiency, among others. 8| Automated Short Answer Grading Using Conventional And Modern NLP Techniques Location: Bengaluru Presenter: Bharath Kumar Bolla, Senior Data Scientist at Happiest Minds About: In this paper, the author will be presenting research into Automatic Short Answer Grading (ASAG) by combining conventional and advanced Natural Language Processing (NLP) methods, which include various embeddings ranging from word-level and sentence-level to contextual. He will also be presenting a comprehensive evaluation of various embedding techniques (word2vec, FastText, ELMo, Skip-Thoughts, Quick-Thoughts, FLAIR embeddings, InferSent, Google’s Universal Sentence Encoder and BERT) with respect to short text similarity. 9| Building An AI-driven Logistics Platform Location: Hyderabad Presenter: Rishit Jain, Product Manager, Data Science and AI Strategy at Delhivery About: In this paper, the author will be presenting key tenets and levers of building an AI platform from scratch and specific challenges that are unique to AI products. He will be talking about factors of system-driven allocation such as ensuring fairness and equal opportunity, utilising local ground intelligence, and the complexity of delivering a shipment. 10| Predicting Product Success Using AI In Video And Audio Analytics Location: Hyderabad Presenter: Govind Maheswaran, Senior Consultant at Ernst & Young About: In this paper, the author will be presenting a methodology for automating the role of a moderator in Focus Group discussions using artificial intelligence, where the proposed solution uses machine learning and deep learning to process the video and audio streams of a Focus Group. 11| Leveraging BERT + Deep Learning For Impactful Analysis On Streaming News And Events Location: Hyderabad Presenter: Akshay Sharma, Sr. Data Scientist at Tidyquant About: In this paper, the author will be presenting a model which is capable of processing huge amounts of data from multiple sources like tweets, websites, documents, etc., and provides deep insights by identifying the context of the data which has been provided to it. He will also be talking about how the solution works on top of Bidirectional Encoder Representations from Transformers (BERT), the neural network-based technique for Natural Language Processing (NLP) pre-training. 12| Partitioning Nearest Neighbour Approach To Regression Variation Improvement In Tree-Based Approaches Location: Hyderabad Presenter: Abhinav Mathur, Data Scientist at Clinton Health Access Initiative About: In this paper, the author will be presenting a hybrid approach of using two intuitive and explainable algorithms, CART and k-NN regression, to improve the generalisation and sometimes the runtime for regression-based problems.
13| Algorithm To Recommend Corrective Actions For Yaw Misalignment In A Wind Turbine Location: Hyderabad Presenter: Malavika Peedinti, Assistant Manager – Data Analytics at BLP Clean Energy About: In this paper, the author will be explaining how machine learning is being used to detect yaw error right after its onset, identifying the root cause and quantifying the consequent energy loss. Peedinti will also discuss how the detection of yaw misalignment is done by analysing the wind direction, nacelle position, and yaw angle data captured by the Supervisory Control and Data Acquisition (SCADA) system installed at the wind power plant. 14| Detecting and Predicting Price Change Acclimation Location: Hyderabad Presenter: Kajal Anajwala, Senior Manager, Global Advanced Analytics – Diageo About: In this paper, the author will be presenting a model which detects the time taken for demand to return to a normal level after a price change. It then predicts how long it will take for demand to return to a normal level if a price increase is planned in the future. 15| Approaches For Predicting Potential Cancerous Cell Formation in the Fetus Location: Hyderabad Presenter: Debashish Banerjee, CEO at Blackstone Synergy About: In this paper, the author will be presenting mathematical models exploring the parametric influences of the pathological changes, the hormonal changes, the relative coordinates of the fetus, the relative growth ratio of the brain and the body, and finally the functional MRI derivatives for cell morphology and cytoplasmic changes that signal potential mutants in embryonic development. 16| Content And Author Identification In Indian English, Hindi and Bangla Location: Hyderabad Presenter: Subhabrata Banerjee, Computational Linguist at HCL Technologies About: In this paper, the author will be presenting a combined solution in Indian English, Hindi and Bangla for content detection and author identification. Banerjee will also be discussing how the work exploits three online monolingual corpora of plain text, as well as Named Entity-annotated text. 17| Machine Learning Application To Create Experimental Learning Models For Personal Finance Location: Hyderabad Presenter: Bharat Shah, Chartered Accountant and ISB AMPBA alumnus About: In this paper, the author will be presenting a predictive model which applies unsupervised and supervised learning to form different clusters of income and expenses, helping to identify patterns of income and expenditure. He will also talk about how it provides the ability to visualise someone’s income and expense pattern over 5-10 years. 18| Smart Job Order Prioritization with AI Location: Hyderabad Presenter: Shiva Tyagi, Machine Learning Engineer at TCS About: In this paper, the author will be presenting a solution which focuses on helping recruiters prioritise job requests based on a probability score of request completion, thereby increasing the fill rate (the number of job requests completed) and reducing the time taken, ensuring higher value returns with relatively less effort. 19| Domain-Specific Word Segmentation and Hierarchy Detection using NLP Algorithm Location: Bengaluru Presenter: Prakash Selvakumar, AVP at Genpact About: In this paper, the author will be presenting a machine learning approach to build a word segmentation algorithm and also find the hierarchical structure in the text (for example, Header1 or Header2, etc.)
The paper focuses on a machine learning approach to perform word segmentation and hierarchy detection on medical documents (in the form of editable digital materials like PDFs).
|
Just a few days away now, Machine Learning Developers Summit, which is to be held on 22-23 Jan in Bengaluru and on 30-31 Jan in Hyderabad, has created a buzz around the tech community. With MLDS, Analytics India Magazine aims to bring in researchers and innovators together on one platform, where they will be presenting […]
|
["Deep Tech"]
|
["machine learning research", "Ml research papers", "research paper"]
|
Ambika Choudhury
|
2020-01-15T18:02:51
|
2020
| 1,539
|
["machine learning research", "Ml research papers", "research paper", "data science", "artificial intelligence", "machine learning", "AI", "neural network", "ML", "computer vision", "NLP", "deep learning", "analytics"]
|
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "neural network", "NLP", "computer vision", "data science", "analytics"]
|
https://analyticsindiamag.com/deep-tech/top-paper-presentations-you-must-not-miss-at-mlds-2020/
| 4
| 10
| 1
| true
| true
| true
|
10,097,534
|
Stack Overflow’s Moderators are Its Last Line of Defense Against AI Junk
|
To say that Stack Overflow has been having a bad year is an understatement. From considerable community backlash over its proposed LLM product to uproar over its API access changes, the community question-and-answer platform has come under fire since ChatGPT exploded in popularity. However, this isn’t the only reason the site has declined in popularity. New statistics show that Stack Overflow has lost around 50% of its traffic over the past one and a half years. Moreover, its lifeblood of questions and answers has also reduced by 50%. This also comes at a time when many users of the site feel increasingly strangled by moderation. Even as the site continues to crack down on the quality of its content, a case can be made for the increase in moderation on the website. As the Internet continues to be filled with AI junk, Stack Overflow’s heavily moderated database of rich user-driven content might be the last bastion of human-generated, domain-specific data. Stack Overflow’s unsteady mutiny Even before the launch of ChatGPT in November last year, Stack Overflow was seeing a steady decline in users. This was mainly caused by the company’s new-found attitude towards moderation, which started to veer into the extreme. Hacker News forum member JohnMakin stated, “Moderation on SO has gotten progressively more horrible. Can’t tell you how many times I found the exact, bizarre question I was asking only to see one comment trying to answer it and then a mod aggressively shutting it down for not being “on topic” enough or whatever….Oftentimes the best answer is buried in comments and has very negative feedback despite answering the exact question.” This can largely be traced back to a moderation strike which curators, contributors, and moderators of the site participated in on June 5th 2023. The main objective of this was to protest Stack Overflow’s flip-flopping AI policy, which first led to thousands of posts being removed and hundreds of users being suspended. This was then revoked in May of this year, allowing AI content to be published on the platform, much to moderators’ chagrin. This then led to moderators raising the alarm over AI-generated content, believing that it will “over time, drive the value of the sites to zero”. They also argued that the company has ignored the needs of its community, instead focusing on business pivots. Through the strike, they aimed to bring attention to the issues moderators on the site face. While the moderators are currently engaged in a protracted battle against the site’s owners, it seems that they are slowly winning. They have succeeded in bringing in an interim solution on the generative AI front, wherein AI-generated content will be checked against a set of ‘strong’ and ‘weak’ heuristics, which will determine whether a post should be removed or not. The moderators were also successful in getting Stack Overflow to continue providing access to the data dumps and API access. This battle underscores the importance of sticking to human-generated content in the age of AI, especially when the company is trying to make a living selling training data. Saving the golden goose Currently, many developers have turned to chatbots to solve their programming issues. As algorithms like ChatGPT get better, their capability to logically deconstruct code also improves.
Kartik D, a Senior Backend Developer at MachineHack, said on using Stack Overflow, “Finding the right Stack Overflow answer for an issue is difficult, but it’s easier in ChatGPT. Combining GPT-3.5 and Bard you get a good result, but the suggested results in Bard usually redirect to Stack Overflow.” This shows the impact that Stack Overflow has on the training datasets of large language models like GPT-4. It is well known that question-answer sites are some of the richest sources of data, especially for large language models. Not only is the quality of the data high, but it is also structured in a format well suited to training. User maxlin on the Hacker News forum summarised this perfectly, stating, “Even though StackOverflow in the common use case has been taken over by ChatGPT, I sincerely hope it keeps operating, stays strict (even if it causes collateral) and keeps ban on LLM-generated content…Obviously ChatGPT was trained partly with data only gainable from a healthy StackOverflow-kind of site with users actively asking unique questions and enough people answering those unique questions with well-thought-out answers.” This also echoes the statements of Reddit CEO Steve Huffman, who has stated that Reddit’s ‘corpus of data is really valuable’, as it contains things that people would ‘only ever say in therapy, or A.A., or never at all’. In that way, Stack Overflow also contains answers to some of the most specific technical queries on the Internet, keeping the quality high and up-to-date. If AI content is allowed on the site, the quality of overall content would deteriorate and move away from the carefully worded and constructed answers of today. Moreover, stronger moderation will only increase the quality of the data, which is something Stack Overflow will soon desperately count on as self-debugging LLMs become more prominent.
|
New statistics show that Stack Overflow has lost around 50% of its traffic over the past one and a half years.
|
["AI Features"]
|
[]
|
Anirudh VK
|
2023-07-25T17:34:06
|
2023
| 857
|
["Go", "ChatGPT", "API", "AI", "chatbots", "GPT", "Aim", "generative AI", "R", "AI-generated content"]
|
["AI", "generative AI", "ChatGPT", "Aim", "chatbots", "R", "Go", "API", "GPT", "AI-generated content"]
|
https://analyticsindiamag.com/ai-features/stack-overflows-moderators-are-its-last-line-of-defense-against-ai-junk/
| 3
| 10
| 1
| false
| false
| false
|
10,070,081
|
When Narayana Murthy endorsed tech for Indian cricket team
|
The use of technology in sports is not new. In fact, technology and sports have gone hand in hand for over 100 years now. For example, in 1881, photo finishing was used for a horse race for the first time. When it comes to cricket, technologies like Hawkeye, Snicko or Edge detection, Hotspot and DRS have changed the game for the better. Teams also leverage technology to analyse players’ fitness levels and performance, and to run SWOT analyses on opponents. However, things were a bit different in the 90s. At MachineCon 2022, Javagal Srinath spoke about how the Indian cricket team used technology. The era of VHS For nearly four decades, from the 70s to the early 2000s, VHS dominated home video. The first instance of technology used by the Indian cricket team was in the form of VHS. “Initially, we struggled a lot. The team strategy was not well defined. The team meetings were haphazard. I would say the youngsters used to be reprimanded left right and centre and the seniors got away. The best thing I could ask in those days was VHS cassettes. However, it was a laborious process,” Srinath said. However, analysing performance on a VHS tape was painful. “We lost patience. We were unable to pinpoint what we wanted to watch. And, we had enough,” he said. Massive dabba with a query screen Srinath tapped into his engineering background. In 2000, after long discussions with his engineer friends, he decided to introduce technology to the Indian cricket team. “So a few of our friends got together and got the help of Phoenix Global Solutions. They helped put up a nice system where we were able to tag each and every ball, and then we had a query screen where we could call the pictures at will. So that was the first time we introduced technology into our game,” Srinath said. While the technology was in place, getting stakeholders on board proved challenging. “It took a long time for us to convince our own players,” Srinath recalled. “Slowly split screen came into existence and we could watch our pictures from the previous matches where we’re doing well, and then when you are struggling then we used to put the pictures together and watch.” The technology was game-changing. “That made a vast difference in how I approached my cricket, or some of us approached cricket until then.” Until then, the players didn’t have a lot of insight into their approach to the game. No analytics tools were available to understand their strengths and weaknesses. “You’re being picked for the Indian cricket team, so you are at this level, and hence you got to bowl well. This was the coaching that we got then,” Srinath said. However, things changed for the better. “Later on, we could sit and analyse the strengths and weaknesses.” Woolmer’s Excel sheet Srinath said former English cricketer and coach Bob Woolmer maintained an Excel sheet to note down a batsman’s strengths and weaknesses. Woolmer’s Excel sheet caught Srinath’s attention during a South Africa tour in 1996 and turned out to be the inspiration behind the technology developed by Phoenix Global Solutions in 2000. (Source: ICC) “We were the first ones to put the video into it, and during those days, even the picture cards were huge. We got it from Israel, and getting through customs was a massive thing. Then we built a massive dabba which was going blue every third minute. We used to keep it cool by running a massive machine. Somehow or the other we were able to put these things together, and we went to the BCCI, and then I said this is exactly what we want,” he added.
Narayana Murthy’s endorsement Even though Srinath and Co managed to get BCCI on board, the technology was very nascent, and Srinath knew it needed endorsement from a big techie. And back then, there was no bigger name than Infosys founder Narayana Murthy, who also happened to be Srinath’s neighbour. “Those days we didn’t have enough people who could understand and appreciate technology. They would just ridicule it. So I went, spoke to him, and said: Look, sir, we need your endorsement.” (Source: ICC) Murthy demanded to see the technology. Srinath and co took the tool to his house and gave a presentation. The tool was loaded with 200 clips of Sachin Tendulkar playing on the offside. The query was pretty simple, and it was enough to get the attention of Narayana Murthy. “Once we did the entire presentation for the press and for the BCCI, and with Narayana Murthy speaking in favour, nobody said a word,” Srinath said.
|
Technology made a vast difference in the way I approached my cricket, Srinath said.
|
["AI Features"]
|
[]
|
Pritam Bordoloi
|
2022-06-29T16:00:00
|
2022
| 778
|
["Go", "programming_languages:R", "AI", "programming_languages:Java", "RAG", "Ray", "analytics", "CLIP", "R", "Java"]
|
["AI", "analytics", "Ray", "RAG", "R", "Go", "Java", "CLIP", "programming_languages:R", "programming_languages:Java"]
|
https://analyticsindiamag.com/ai-features/when-narayana-murthy-endorsed-tech-for-indian-cricket-team/
| 2
| 10
| 0
| false
| true
| false
|
10,092,403
|
MetaGPT — Realising the GPT-4 Dream
|
During the launch of GPT-4, OpenAI’s researchers showed that the LLM could create a website from scratch using just a sketch on paper as a reference. Even as users dream of creating websites from scratch using the power of GPT-4, OpenAI has still not released this capability of its multimodal LLM. However, Pico Apps’ MetaGPT seems to have taken steps to realise this dream, albeit from a different angle. This GPT-4-powered application can create websites, apps, and more based only on natural language prompts. The service has been used to create dashboards, code-based visualisations, and even a marriage proposal! What is MetaGPT? Simply put, MetaGPT is a web application that allows users to build other web applications. The service first asks users what they want to create, and takes the prompt as a basic idea of what the website can be. MetaGPT then asks for a few additional details, such as the required inputs from the user. The part that makes MetaGPT stand out from other no-code website-building platforms is its integration with ChatGPT. Upon taking the initial prompt and inputs from the user, we can choose to integrate ChatGPT’s functionalities into the application. The prompts encased within curly brackets will be passed on to ChatGPT, which will then generate the required text. This can be done completely without any code or API calls, relying on natural language prompts to serve the user’s purpose. The service also allows users to iterate on their prompts, showing a visual representation of what the website looks like while GPT-4 codes it. The user is then given the option to iterate on the output of the chatbot, with the website recommending that the user go through multiple iterations to reach a good application. These iterations can range from UI/UX changes to bug fixes to complete redesigns of the site. We tried building a basic website that generates an op-ed given a topic and the desired word length. We prompted the application with the simple sentence “an application that can write an op-ed”. After this prompt, the web app clarified a few additional details, such as what the user input should be and what syntax should be used to pass on the work to ChatGPT. This advancement in web applications builds upon the promise offered by GPT-4, which OpenAI is still in the process of deploying safely. However, it seems that the AI world is hungry for innovation, and it isn’t waiting for OpenAI to fulfil its dreams. Taking over OpenAI Shortly after the launch of GPT-4, OpenAI released ChatGPT plugins. In a move which many called the ‘App Store’ moment for LLMs, the company not only released 12 plugins which allowed the chatbot to extend its functionality, but also released a standard that would allow developers to create more plugins. However, the expectations for this feature have slowly eroded, as plugins continue to be available to only a small percentage of ChatGPT’s users. What’s more, the feature is only available to ChatGPT Plus users, with others needing to join a waitlist for access. The developer community has found novel ways to deploy the GPT-4 API, picking up OpenAI’s slack. One only needs to look at the success of AutoGPT, an open-source project looking to allow GPT-4 to function autonomously. Other similar projects include BabyAGI, a GPT API-powered task management system, and AgentGPT, a platform to create autonomous AI agents to automate repetitive tasks.
These open-source projects have captured lightning in a bottle, igniting the imaginations of many who wish to use GPT-4 for new use-cases. The hype created by OpenAI around the launch of GPT-4 has not died away, but shifted towards these community-driven projects, as seen in the runaway success of AutoGPT, MetaGPT, BabyAGI and others. As OpenAI continues to delay the launch of GPT-4 features like multimodality and ChatGPT plugins, the community is working hard to find ways to deploy this powerful LLM in increasingly innovative ways. While some are just wrappers of OpenAI’s APIs with added functionality, like Forefront.ai or AnonChatGPT, others, like MemeCam or Bing Chat, use the GPT-4 API to facilitate new use-cases altogether. OpenAI now needs to move faster, or risk its dream being stolen by others on the bleeding edge.
|
AutoGPT, and now MetaGPT, have realised the dream OpenAI gave the world
|
["Deep Tech"]
|
["GPT-4"]
|
Anirudh VK
|
2023-04-26T12:45:00
|
2023
| 709
|
["BabyAGI", "Go", "ChatGPT", "API", "TPU", "OpenAI", "AI", "MetaGPT", "GPT-4", "R", "AutoGPT"]
|
["AI", "ChatGPT", "OpenAI", "AutoGPT", "BabyAGI", "MetaGPT", "TPU", "R", "Go", "API"]
|
https://analyticsindiamag.com/deep-tech/metagpt-realising-the-gpt-4-dream/
| 3
| 10
| 0
| true
| false
| true
|
10,052,426
|
The ICCV 2021 Best Papers Have Been Announced
|
ICCV (IEEE International Conference on Computer Vision) 2021 announced the Best Paper Awards, honourable mentions, and Best Student Paper. ICCV is one of the premier international biennial computer vision conferences, featuring the main conference track and many tracks of workshops and tutorials. This year’s conference was held entirely online. The following papers represent the best papers presented at ICCV 2021; let’s take each of them in turn and examine its significance. Title: Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields Researchers Jonathan T. Barron – Google Ben Mildenhall – Google Matthew Tancik – UC Berkeley Peter Hedman – Google Ricardo Martin-Brualla – Google Pratul P. Srinivasan – Google Synopsis The rendering technique used by neural radiance fields (NeRF) samples a scene with a single ray per pixel, which may result in overly blurred or aliased renderings when the training or testing images observe scene content at different resolutions. The researchers introduced mip-NeRF, a multiscale NeRF-like model that addresses NeRF’s inherent aliasing. NeRF operates by projecting rays, encoding the positions of points along those rays, and training separate neural networks at different scales. By comparison, mip-NeRF represents the scene at different scales by casting cones, encoding the positions and sizes of conical frustums, and training a single neural network. Additionally, mip-NeRF can match the accuracy of a brute-force supersampled NeRF variant while being 22 times faster. The researchers expect that the general strategies provided here will benefit other researchers attempting to improve the performance of neural rendering models based on ray tracing. (Read here) Title: OpenGAN: Open-Set Recognition via Open Data Generation Researchers Shu Kong – Carnegie Mellon University Deva Ramanan – Carnegie Mellon University, Argo AI Synopsis The researchers developed OpenGAN for open-set recognition by incorporating two technical insights: 1) training a classifier on off-the-shelf (OTS) features rather than pixels, and 2) adversarially synthesising fake open data to increase the pool of open-training data. With OpenGAN, the researchers demonstrate that employing a GAN-discriminator to accomplish state-of-the-art open-set discrimination is possible once a validation set of genuine outlier examples is used to select the GAN-discriminator. OpenGAN is effective even when the outlier validation cases are small in number or highly skewed. Both open-set image recognition and semantic segmentation are greatly improved with OpenGAN. (Read here) Title: Viewing Graph Solvability via Cycle Consistency Researchers Federica Arrigoni – University of Trento Andrea Fusiello – University of Udine Elisa Ricci – University of Trento, Fondazione Bruno Kessler Tomas Pajdla – CIIRC CTU in Prague Synopsis The researchers examined the solvability of viewing graphs, i.e., whether they uniquely determine the projective cameras, and produced several significant improvements in the theory and practical use of viewing graphs. Additionally, the researchers analysed basic graphs with up to 90 vertices, which sets the bar for uncalibrated graph processing. The researchers examined the concept of solvability in this work, which is entirely dependent on the topology of the viewing graph. Adding additional information would result in a new notion of solvability, which might be fascinating to investigate in the future.
Connecting this to the calibrated case would also be an intriguing area of future investigation. Apart from its theoretical significance, the solvability problem has a practical consequence, as reconstruction methods benefit from knowing in advance whether the graph under consideration is solvable or not. If the problem is ill-posed, no method will produce a suitable solution. Finding a maximal subgraph that is solvable would be of significant relevance in this scenario. (Read here) Title: Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction Researchers Jeremy Reizenstein – Facebook AI Research Roman Shapovalov – Facebook AI Research Philipp Henzler – University College London Luca Sbordone – Facebook AI Research Patrick Labatut – Facebook AI Research David Novotny – Facebook AI Research Synopsis The researchers have released Common Objects in 3D (CO3D), a dataset of in-the-wild object-centric videos covering 50 object categories, annotated with camera and point cloud data. Additionally, they proposed NerFormer, a hybrid of a Transformer and neural implicit rendering, capable of reconstructing 3D object categories from CO3D with a higher degree of accuracy than the 14 other baselines assessed. CO3D data collection continues at a steady clip of 500 videos per week, which the researchers intend to distribute shortly. (Read here) Conclusion The International Conference on Computer Vision (ICCV 2021) brings together the international community focused on computer vision. The virtual platform for ICCV 2021 will be the optimal venue for engaging with this unique community’s most recent research and ideas. ICCV 2021 is the primary international computer vision event, consisting of the main conference and many co-located workshops and tutorials. It delivers amazing value for students, academics, and industry researchers due to its high quality and low cost.
|
A sneak preview of the top ten papers accepted for presentation at ICCV 2021.
|
["AI Trends"]
|
["Computer Vision", "Neural Networks"]
|
Dr. Nivash Jeevanandam
|
2021-10-27T15:00:00
|
2021
| 785
|
["Go", "AI", "neural network", "computer vision", "RAG", "GAN", "Ray", "Computer Vision", "Rust", "CLIP", "R", "Neural Networks"]
|
["AI", "neural network", "computer vision", "Ray", "RAG", "R", "Go", "Rust", "CLIP", "GAN"]
|
https://analyticsindiamag.com/ai-trends/iccv-2021/
| 4
| 10
| 1
| false
| true
| true
|
10,096,983
|
OpenAI’s Malevolent Plan Backfires
|
Soon after the launch of GPT-4, OpenAI CEO Sam Altman appeared in Congress to ‘educate’ regulators on the potential harms of AI. From taking away jobs to stating that safety is ‘vital’ to OpenAI’s work, Altman perfectly played the role of a measured AI doomer to whip regulators into a frenzy to curtail the quickly-growing AI market. While many criticised this move as a way to increase OpenAI’s lead in the ecosystem, it now seems like this has backfired. According to reports, the FTC (Federal Trade Commission) has opened an expansive investigation into OpenAI’s activities, mainly over concerns about harm to personal reputations and the risk of leaking personal data. Earlier this week, the regulator sent OpenAI a document detailing its concerns over the company’s products. This not only underscores the AI company’s hypocrisy, but also represents a strong regulatory threat against it, possibly putting an end to the free rein it has been enjoying in the emerging AI market. FTC’s opening salvo Taking a look at the document released by the Washington Post, the FTC has requested a variety of information from the company, including the whole database of third parties using its APIs. What’s more, the regulator has even asked OpenAI to pull back the curtain on its top models, asking the company to describe in detail the research behind its products. The FTC has also requested the training data OpenAI used, as well as information on the reinforcement learning from human feedback process. It has also requested OpenAI to shed some light on the ChatGPT security issue that occurred in March of this year, which allowed some personal information to be leaked. Other requested information includes details on the process of retraining and refining LLMs, risk and safety assessments, and personal information protection. This is the meat of the request, as the FTC’s concerns are made clear in Section 24 of the interrogatories section. The regulator also wants to know the capacity of OpenAI’s LLMs to generate statements about individuals, especially statements containing personal information. While the regulator has also expressed concerns over the LLMs’ capacity to make ‘misleading, disparaging, or harmful statements’, the crux of the matter resides in how OpenAI is handling personal information. This is also in line with the FTC’s commitment to stick to the current civil rights laws on discrimination. FTC Chair Lina Khan has specifically stated that “There is no AI exemption to the laws on the books”, suggesting that the FTC will stick to the current regulatory framework until the Biden administration creates a new one. The FTC’s moves have put Altman on the back foot, as evidenced by his tweet thread on the matter. While decrying the fact that the FTC’s request was leaked to the press, he stated, “It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.” This comment stands in line with the other messages Altman has been sending to regulators, speaking about how AI is risky while OpenAI’s products are built ‘on top of years of security research’. This is a narrative we’ve seen before, backed up in this instance by a reiteration that OpenAI is not ‘incentivised to make unlimited returns’ due to its capped-profit structure. Tricking regulators no more? Sam Altman has been on a global charm offensive to convince regulators of the potential impact of AI algorithms.
Calling it a ‘diplomatic mission’, the CEO has taken it upon himself to be the champion of AI to the world’s regulators. This strategy seems to be a leaf out of lobbyists’ books, curtailing regulation for one company while constraining the market with heavy-handed laws. Hidden behind his meetings with global regulators is a sinister agenda to expand OpenAI’s products all over the world with as little regulatory oversight as possible. Reports have emerged that Altman has lobbied the EU to water down its stringent AI Act to allow OpenAI a freer hand in the data privacy-centric EEA. What’s worse, the strategy actually worked, as the latest draft of the Act does not classify GPT as a high-risk system, in line with OpenAI’s requests for the same. Under the new act, providers of foundational models need to only comply with a small handful of requirements, not the stringent regulation they faced as high-risk systems. Sarah Chander, a senior policy advisor at European Digital Rights, stated on the move, “They got what they asked for…OpenAI, like many Big Tech companies, have used the argument of utility and public benefit of AI to mask their financial interest in watering down the regulation.” While Altman outwardly has asked for the AI field to be regulated as a whole, it seems that he is ensuring exceptions can be made for OpenAI’s financial gain. This means that OpenAI will be allowed to ‘self-regulate’ where other companies bow down to the needs of regulators. Now, it seems that the FTC has caught on to this game, going after the biggest fish in the sea for its first catch. With the inquiry into OpenAI, the FTC has indirectly revealed that they have seen through Altman’s guise, as they are striking directly into the heart of the matter. The company is currently under fire for multiple copyright violations, which the FTC has used as an inroad to raise concerns over OpenAI’s handling of personal information. All in all, it seems as though a storm is brewing on the horizon for OpenAI, and Altman is in the centre of it all.
|
According to reports, the FTC (federal trade commission) has opened an expansive investigation into OpenAI’s activities.
|
["AI Features"]
|
[]
|
Anirudh VK
|
2023-07-15T13:00:00
|
2023
| 919
|
["Go", "ChatGPT", "API", "AWS", "OpenAI", "AI", "Git", "RAG", "GPT", "R"]
|
["AI", "ChatGPT", "OpenAI", "RAG", "AWS", "R", "Go", "Git", "API", "GPT"]
|
https://analyticsindiamag.com/ai-features/openais-malevolent-plan-backfires/
| 3
| 10
| 1
| false
| false
| false
|
10,165,140
|
Amazon to Launch New Reasoning Model by June, To Rival OpenAI o1, Claude 3.7 Sonnet
|
Amazon is set to release a new reasoning model under its Nova branding by June this year, Business Insider reported. The model will use a hybrid approach, meaning it can provide quick responses or use ‘extended thinking’ for more complex queries. It is also reported that Amazon aims to make the model more cost-efficient than OpenAI’s o1, Gemini 2.0 Flash Thinking, and even Claude 3.7 Sonnet from Anthropic – the AI startup it has actively invested in. Furthermore, Amazon aims for its upcoming reasoning model to rank among the top five in various benchmarks. This means that Amazon will be the latest company to join the reasoning-model bandwagon, which OpenAI started with o1. OpenAI also released the o3 family of reasoning models last year, which claimed the top spot on several benchmarks. A few weeks ago, Chinese AI maker DeepSeek caused quite a storm in both the AI ecosystem and the US stock market with its high-performance, cost-efficient R1 reasoning model. Recently, Elon Musk’s xAI and Anthropic also released models with reasoning, or ‘thinking’, capabilities. Last year, Amazon launched its family of Nova AI models on Bedrock, namely Nova Micro, Lite, Pro, and Premier. Each model is optimised for specific tasks, ranging from text summarisation and translation to complex document processing and multimodal interactions. “They are really cost-effective and about 75% less expensive than the other leading models in Bedrock,” Amazon chief Andy Jassy said. Recently, Amazon announced Alexa+, a next-generation personal assistant powered by Anthropic’s Claude, which will be available for free to Prime members. “Alexa+ is more conversational, smarter, personalised, and helps you get things done,” Panos Panay, senior vice president of devices and services at Amazon, said. Amazon has infused Alexa+ with LLMs to improve knowledge retrieval. Users can upload documents, emails, or images for Alexa+ to analyse and summarise. “For example, users can send a photo of a live music schedule, and Alexa+ will add the details to their calendar,” the company said.
|
Amazon reportedly aims to make the model more cost efficient than OpenAI’s o1, Claude 3.7 Sonnet and Gemini 2.0 Flash Thinking.
|
["AI News"]
|
["AI (Artificial Intelligence)", "AI reasoning", "Amazon"]
|
Supreeth Koundinya
|
2025-03-05T09:48:58
|
2025
| 337
|
["Anthropic", "Go", "Gemini 2.0", "OpenAI", "AI", "AI reasoning", "R", "Amazon", "XAI", "Aim", "xAI", "AI (Artificial Intelligence)", "startup"]
|
["AI", "OpenAI", "Anthropic", "Gemini 2.0", "xAI", "Aim", "R", "Go", "XAI", "startup"]
|
https://analyticsindiamag.com/ai-news-updates/amazon-to-launch-new-reasoning-model-by-june-to-rival-openai-o1-claude-3-7-sonnet/
| 2
| 10
| 3
| true
| true
| false
|
10,051,798
|
Indian Institute of Technology Patna Introduces B Tech Program In Artificial Intelligence and Data Science
|
The Indian Institute of Technology, Patna (IIT-P), recently announced that it will introduce three new courses under its four-year B Tech program for the upcoming academic session. The new courses are Artificial Intelligence and Data Science, Engineering Physics, and Bachelor of Science in Mathematics and Computing. The institute has launched the new programs to meet the rising demand for new-age technology and professionals in these fields. Admission to these courses will be through JEE-Advanced 2021, the results of which have already been announced on October 15. “The institution has allocated 36 places for admission to Artificial Intelligence and Data Science, 36 for Engineering Physics and 48 for BS in Mathematics and Computer Science. The institute has increased its intake capacity to 547 from last year’s 427 seats, with an additional 10% of excess seats reserved for foreign candidates,” said Rajendra Paramanik, Public Relations Professor, IIT-P. Elaborating on the details of the newly launched courses, the university officials said that the Data Science and Artificial Intelligence discipline teaches the handling of huge data volumes. The Engineering Physics course is a blend of engineering, physics and mathematics, while the Mathematics and Computing discipline combines mathematical and analytical components. Officials said that these new programs would also provide students with abundant internship and job opportunities across the country and abroad. IIT-P already offers B Tech programs in various disciplines, including computer science and engineering, electrical engineering, mechanical engineering, chemical engineering, civil engineering, and metallurgical and materials engineering.
|
Admission to these courses will be through JEE-Advanced 2021, the results of which have already been announced on October 15.
|
["AI News"]
|
["AI (Artificial Intelligence)", "covid-19", "Data Science", "data science programs", "Data Scientist", "Deep Learning", "IIT", "Machine Learning", "Python"]
|
Victor Dey
|
2021-10-18T13:30:49
|
2021
| 247
|
["data science", "IIT", "artificial intelligence", "covid-19", "AI", "programming_languages:R", "Machine Learning", "data science programs", "Python", "Deep Learning", "Data Science", "Data Scientist", "R", "AI (Artificial Intelligence)"]
|
["AI", "artificial intelligence", "data science", "R", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/indian-institute-of-technology-patna-introduces-b-tech-program-in-artificial-intelligence-and-data-science/
| 2
| 5
| 0
| false
| false
| false
|
10,047,330
|
Top AI Communities On Discord That every Data Scientist Should Join
|
Discord has proven to be an essential forum to facilitate discussions, ask queries, learn with like-minded people, and create a community of data science enthusiasts. The VoIP and instant messaging platform allows members to chat on private servers dedicated to specific topics. In this article, we introduce you to the top Discord servers to join for AI/ML and data science discussions. 1. Analytics India Magazine The Analytics India Magazine (AIM) server is the official Discord for AIM. This one is an all-in-one forum for AI enthusiasts to discuss and learn about career options, upcoming events, and news. With specific discussion channels including ‘developers corner’, ‘hackathons’, ‘job openings’, ‘events’, ‘career advice’, and ‘courses & education’, this server is a great resource for individuals hoping to build a career in data science or learn more about it. Join the server here. 2. The Data Share The Data Share Discord server is a project for the Towards Data Science community and an open area for any data science enthusiast to discuss their projects and problems. The server is a valuable resource for asking data science questions, obtaining resources, gaining feedback, and matching with a community of shared interests. “We cover all the core elements of the field, including machine learning, natural language processing, and data engineering,” TDS’ blog post states. Join the server here. 3. Learn AI Together With almost 4,000 members, Learn AI Together is a bubbling server for AI enthusiasts. The group facilitates sharing papers, projects, Kaggle competitions, and various courses. In addition, themes like weekly discussions on the latest news and releases in AI are active on this server. It is an excellent resource for enthusiasts wanting to learn. Join the server here. 4. CS Dojo The CS Dojo server is YouTuber CS Dojo’s social community on Discord. One of the most popular programming YouTubers, Dojo holds community discussions on programming, game development, web development, and AI/ML. This Discord server has over 4,000 members discussing data science and ML topics, asking queries, and gaining feedback. Join the server here. 5. /r/LearnMachineLearning Based on the subreddit “/r/LearnMachineLearning”, this server is one of the largest data science communities on Discord, with more than 6,000 members. The large member base allows for the quickest answers to ML questions. The server also consists of study rooms and channels related to popular data science courses that can be especially useful for beginners. Join the server here. 6. Data Science The Data Science server was created by data scientists themselves to build a community that explores all fields of data science. It facilitates discussions among members with different levels of expertise. The server’s 1,000 members are split into three categories, ‘share’, ‘ask’ and ‘off topic’, to ensure organised conversation. Join the server here. 7. Tech with Tim With over 30,000 members, this server is hosted by the famous YouTuber Tim Ruscica, a computer science graduate and owner of the channel Tech with Tim. It is a space for enthusiasts to resolve queries, get project suggestions and discuss programming, software engineering, ML, Python and JavaScript. Join the server here. 8. TensorFlow The TensorFlow Discord server has a vast member base and welcomes beginners trying out TensorFlow. In addition, the server has specific discussion channels related to TensorFlow, ML, genetic algorithms, GANs, and neural networks.
It is also one of the few servers with a track on AI ethics, providing a healthy and comfortable environment for ML enthusiasts and beginners. Join the server here. 9. Python The Python Discord is one of the largest communities on Discord. Focused on the Python programming language, the community facilitates discussions around it. An energetic server, it organises regular team events like code jams, open-source hackathons, seasonal events, and community challenges with prizes for the winners. Join the server here. 10. Fundamentals ML The Fundamentals ML server is a unique forum for enthusiasts interested in the math behind ML. While the server is relatively small, it consists of members from all levels of experience. They share experiences and knowledge in dedicated sections on ML and math, discussing topics from basic linear regression to neural networks. Join the server here. 11. Artificial Intelligence Community The Artificial Intelligence Community is one of the oldest servers on Discord, created in 2017. The group is steadily growing, creating a community of people interested in AI, ML, language processing, and vision & speech. Join the server here.
|
With themes like weekly discussions on the latest news and releases in AI, this server is an excellent resource for enthusiasts wanting to learn.
|
["AI Trends"]
|
["AI community", "discord", "open source data science projects"]
|
Avi Gopani
|
2021-08-30T18:00:00
|
2021
| 737
|
["data science", "artificial intelligence", "open source data science projects", "AI community", "AI", "machine learning", "neural network", "ML", "TensorFlow", "Python", "Aim", "analytics", "discord"]
|
["AI", "artificial intelligence", "machine learning", "ML", "neural network", "data science", "analytics", "Aim", "TensorFlow", "Python"]
|
https://analyticsindiamag.com/ai-trends/top-ai-communities-on-discord-that-every-data-scientist-should-join/
| 4
| 10
| 0
| false
| false
| true
|
50,541
|
CJI Bobde Supports Use Of AI In The Indian Judicial System
|
In a stand that could potentially revolutionise the Indian legal system, Chief Justice of India SA Bobde said that the use of artificial intelligence in the judicial system will help deliver justice swiftly. Speaking at a function organised by Supreme Court lawyers, Justice Bobde said, “I believe exploring this interface would be immensely beneficial for many reasons. For instance, it would allow us to streamline courts’ caseloads through enabling better court management. This would be a low-hanging fruit. On the other end of the spectrum, it will allow us to shift the judicial time from routine-simple-straightforward matters (e.g. cases which are non-rivalrous) and apply them to more complex-intricate matters that require more human attention and involvement.” Even though there is keen interest in leveraging AI in the legal sector, India is yet to mainstream the technology. One of the primary reasons for slow AI adoption in India is the lack of digitised data. For example, India’s leading law firm Cyril Amarchand Mangaldas recently partnered with Kira Systems to leverage the power of AI for contract analysis. On the other end of the spectrum are startups like CaseMine and NearLaw, which are applying emerging technologies to legal research and contract analysis.
|
In a stand that could potentially revolutionise the Indian legal system, Chief Justice of India SA Bobde said that the use of artificial intelligence in the judicial system will help deliver justice swiftly. Speaking at a function organised by Supreme Court lawyers, Justice Bobde said, “I believe exploring this interface would be immensely beneficial for many reasons. For […]
|
["AI News"]
|
[]
|
Prajakta Hebbar
|
2019-11-22T23:51:46
|
2019
| 200
|
["artificial intelligence", "programming_languages:R", "AI", "ML", "Git", "RAG", "GAN", "R", "startup"]
|
["AI", "artificial intelligence", "ML", "RAG", "R", "Git", "GAN", "startup", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/cji-bobde-supports-use-of-ai-in-the-indian-judicial-system/
| 2
| 9
| 1
| true
| true
| true
|
10,138,283
|
AMD Believes in Realistic Goals, Not Overpromising
|
AMD is gradually finding its footing in the AI data centre market, emerging as a strong competitor to NVIDIA with its latest AI accelerators and expanding partnerships. At the Advancing AI 2024 event, the company introduced its new MI325X accelerators for training and inferencing LLMs. The AMD Instinct MI325X accelerators offer leading memory capacity and bandwidth, featuring 256GB of HBM3E with 6.0TB/s throughput—1.8x more capacity and 1.3x higher bandwidth than the NVIDIA H200. They also deliver 1.3x higher peak theoretical FP16 and FP8 compute performance than the H200. When asked how AMD compares itself to NVIDIA, Andrew Dieckmann, CVP & GM of data centre GPU at AMD, told AIM that they benchmark against NVIDIA’s highest-performing solution. “We are trying to take very representative benchmarks that are realistic. I can tell you that in our customer engagements, especially regarding inference workloads, we have yet to find a single workload that we cannot outperform NVIDIA on,” he said, adding that AMD doesn’t always outperform NVIDIA. “However, if we optimise for a specific solution, we can beat them.” The company also previewed the upcoming AMD Instinct MI350 series accelerators, scheduled for release in 2025, promising a 35x improvement in inference performance over current models and featuring up to 288GB of HBM3E memory. Furthermore, the company plans to launch the MI400 in 2026. Interestingly, on the same day, NVIDIA made headlines by delivering its much-anticipated Blackwell GPUs to OpenAI and Microsoft. Microsoft announced that Azure is the first cloud platform to run NVIDIA’s Blackwell system with GB200-powered AI servers. “Our long-standing partnership with NVIDIA and deep innovation continues to lead the industry, powering the most sophisticated AI workloads,” said Microsoft CEO Satya Nadella. According to a recent report, NVIDIA’s Blackwell GPUs are sold out for the next 12 months, reflecting a similar supply situation that occurred with Hopper GPUs several quarters ago. Consequently, NVIDIA is anticipated to gain market share next year. In the latest quarter, AMD reported revenue of $5.8 billion, while NVIDIA continues to dominate the AI chip market with an impressive quarterly revenue of $30 billion. AMD Fills the Gap With NVIDIA’s GPUs sold out for the next year, AMD has an ideal opportunity to meet the demand from customers seeking access to compute resources for training and running LLMs. AMD CEO Lisa Su expects the data centre AI accelerator’s total addressable market (TAM) to grow by more than 60% annually, reaching $500 billion by 2028. “For AMD, this represents a significant growth opportunity,” she said. According to her, AMD GPUs are well-suited for running open-source models like Meta’s Llama 3.1 and Stable Diffusion, outperforming NVIDIA’s H200. “When you look at that across some of the key models, we’re delivering 20 to 40% better inference performance and latency on models like Llama and Mixtral,” said Su in her keynote address. “The MI325 platform delivers up to 40% more inference performance than the H200 on Llama 3.1. Many customers are also focused on training, and we’ve made significant progress in optimising our software stack for training,” she added. Moreover, to challenge NVIDIA’s CUDA, AMD launched ROCm 6.2, which introduces support for essential AI features such as the FP8 datatype, Flash Attention 3, Kernel Fusion, and more. 
These updates enable ROCm 6.2 to deliver up to a 2.4X performance boost in inference and a 1.8X improvement in training across a range of LLMs compared to ROCm 6.0. “ROCm is a complete set of libraries, runtime compilers, and tools needed to develop and deploy AI workloads. We designed ROCm to be modular and open-source, allowing for rapid contributions from AI communities,” said Vamsi Bopanna, SVP of AI at AMD, adding that it is also designed to connect easily with ecosystem components and frameworks like PyTorch and model hubs like Hugging Face. He explained that they have expanded support for newer frameworks like JAX and implemented powerful new features, algorithms, and optimisations to deliver the best performance for generative AI workloads. AMD also supports various open-source frameworks, including vLLM, Triton, SGLang, ONNX Runtime, and more. Bopanna revealed that today, over 1 million Hugging Face models run on AMD. The company recently acquired the European private AI lab Silo AI. “We recently completed the acquisition of Silo AI, which adds a world class team with tremendous experience training and optimising LLMs and also delivering customer specific AI solutions,” said Su. At the event, AMD showcased testimonials for ROCm by inviting startup leaders, including Amit Jain, the CEO of Luma AI; Ashish Vaswani, the CEO of Essential AI; Dani Yogatama, the CEO of Reka AI; and Dmytro Dzhulgakov, the CTO of Fireworks AI. Luma AI recently launched a video generation model called Dream Machine. “The models we’re training are very challenging and don’t resemble LLMs at all. However, we’ve been impressed with how quickly we were able to get the model running on ROCm and MI300X GPUs. It took us just a few days to establish the end-to-end pipeline, which is quite fantastic,” said Jain. More Customers AMD is partnering with customers including Meta, Microsoft, xAI, Oracle, and Cohere, among others. Su highlighted Oracle as a key customer for AMD’s latest GPUs. “They’ve integrated AMD across their entire infrastructure, using our CPUs, GPUs, and DPUs,” she said. Oracle SVP Karan Batta joined Su on stage to discuss how Oracle’s customers are utilising AMD’s hardware tech stack. “Our largest cloud-native customer is Uber. They use Oracle Cloud Infrastructure (OCI) Compute E5 instances with 4th generation AMD EPYC processors to achieve significant performance efficiency. Almost all of their trip-serving infrastructure now runs on AMD within OCI compute,” said Batta. “We also have Red Bull Powertrains developing the next generation of F1 engines for upcoming seasons. Additionally, our database franchise is now powered by AMD CPUs. Customers like PayPal and Banco do Brasil are using Exadata powered by AMD to enhance their database portfolios,” he added. Alongside Oracle, Databricks is another major customer of AMD. “The large memory capacity and incredible compute capabilities of MI300X have been key to achieving over a 50% increase in performance on some of our critical workloads,” said Naveen Rao, VP of generative AI at Databricks, adding that this includes models like Llama and other proprietary models. Microsoft, the first cloud provider to receive NVIDIA Blackwell GPUs, is also partnering with AMD to obtain the new MI325 accelerators. “We’re very excited to see how the teams are coming together. OpenAI, Microsoft, and AMD are all working to accelerate the benefits so that this technology can diffuse even faster. 
We look forward to the roadmap for the MI350 and the next generation after that,” said Nadella.
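Because ROCm-enabled builds of PyTorch expose AMD GPUs through the same torch.cuda device interface that CUDA code uses, an off-the-shelf Hugging Face model typically runs without code changes — which is the sense in which "over 1 million Hugging Face models run on AMD". The following is a minimal, illustrative sketch, not taken from AMD's materials; the model choice (gpt2) and the assumption that a ROCm build of PyTorch and the transformers library are installed are ours:

import torch
from transformers import pipeline

# On a ROCm build with an AMD GPU, torch.cuda.is_available() returns True and
# torch.version.hip reports the HIP version (it is None on CUDA-only builds).
print("GPU available:", torch.cuda.is_available())
print("ROCm/HIP build:", getattr(torch.version, "hip", None))

# Any Hugging Face model id works here; gpt2 is just a small example choice.
generator = pipeline("text-generation", model="gpt2",
                     device=0 if torch.cuda.is_available() else -1)
print(generator("AMD Instinct accelerators are", max_new_tokens=20)[0]["generated_text"])

The same script runs unchanged on NVIDIA hardware, which is the practical point of ROCm targeting the existing PyTorch device API rather than a separate programming model.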
|
AMD CEO Lisa Su expects the data centre AI accelerator total addressable market (TAM) to grow by more than 60% annually, reaching $500 billion by 2028.
|
["Global Tech"]
|
["AMD"]
|
Siddharth Jindal
|
2024-10-14T18:50:35
|
2024
| 1,099
|
["CUDA", "AMD", "Hugging Face", "OpenAI", "AI", "PyTorch", "Azure", "Aim", "generative AI", "JAX", "xAI"]
|
["AI", "generative AI", "OpenAI", "xAI", "Aim", "PyTorch", "JAX", "Hugging Face", "Azure", "CUDA"]
|
https://analyticsindiamag.com/global-tech/amd-believes-in-realistic-goals-not-overpromising/
| 4
| 10
| 3
| true
| false
| false
|
10,056,082
|
Post COVID-19 Business Model For Data Science Companies
|
The COVID-19 pandemic negatively affected millions of professionals working in the data and analytics fields – everything from consumer behaviour to supply chains was disrupted, and the economic fallout is furthering the damage. However, this crisis has also exposed technology’s Achilles’ heel. After the vaccines for COVID-19 were developed, the next normal emerged, allowing leaders to move from survival mode to a more secure position. Now is the time to reimagine and reform the business model of data science companies. “It’s critical for business leaders to understand that large-scale shifts are changing how people work and how business gets done,” says Brian Kropp, Distinguished Vice President, Gartner. “Leaders who respond effectively to these HR trends can ensure their organizations stand out from competitors,” he added. Currently, these companies are starting to restructure their analytics so that organisations won’t face the same model challenges they saw during the pandemic. A company should incorporate strategic, durable execution in this period. These are the key activities: Discover new, repeatable, scalable processes and workflows for managing operations. Use the lessons learned and patterns from prior phases to formulate a new foundation and path forward. Source: Gartner Some basic business model customizations include: 1. Deploy a digital nerve centre Digital nerve centres, which act as a critical link between digitalised operations, processes and assets, short-term operational efficiency and long-term strategy, have become a key capability during COVID-19. They allow companies to mobilise resources, such as new data sources and analytics systems, to enable business teams to analyse emerging trends more quickly, shorten feedback cycles, and gain more insight into possible outcomes. For example, an international retailer with grocery stores in 15 countries uses a digital nerve centre to provide key business functions – supply chain, employee protection, finance, customer and store operations, and digital channel operations – with rapid access to data about the business, customers, and suppliers. As a result, supply-chain leaders can keep store shelves stocked, even with high-demand items. 2. Embrace real-time data Monitoring real-time data from websites, social media, clickstreams, and mobile apps has become increasingly important in recent months. A leader no longer has the luxury of waiting days and weeks for the latest information. Various technologies, including messaging platforms and stream-processing capabilities, enable real-time data processing and analysis; use of the hybrid cloud allows decision-makers to respond in hours instead of days or weeks (a minimal illustrative sketch follows these recommendations). 3. Prioritize cultural shifts The pandemic taught many leaders that their organisations could be more agile during a crisis than they had realised. A growing number of interdisciplinary teams, agile working methods, and data-driven mindsets have sprouted overnight, creating highly targeted and fruitful analytics capabilities. Keeping the momentum going will require cultivating these shifts — such as reskilling workers. Such work is still possible while employees work remotely. As part of its preparation for the future, one financial-services company used Zoom video training to teach senior executives about AI concepts, ways to use the technology, and tips for implementing change. Organisations can also be more accurate and faster at predicting the changing needs of their customer communities by having a diverse workforce. 4. 
Adopt a compliant design Analytical development teams can enhance risk management and detection with various activities and tools, allowing them to build critical oversight into the process. For example, documented guidelines, checklists, and training materials are available to set up diverse teams, use risk metrics, and stay on top of changes, such as changes in policies, laws, and regulations. Activities include putting in place methods and data tools for detecting and mitigating risk in data and monitoring models. There is no time for complacency or nostalgia in this new world. What was once normal cannot be restored; neither risk nor opportunity is small in this new era. In order to deal with constant uncertainty, disruption, and ever-changing environments, leaders must prepare organizations to thrive in this new environment.
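To make the “embrace real-time data” recommendation above a little more concrete, here is a minimal, hypothetical sketch of a rolling metric computed over a simulated event stream; the event source, window size, and variable names are illustrative assumptions, and a production setup would normally sit on a messaging platform and stream processor rather than an in-memory loop:

from collections import deque
import random

window = deque(maxlen=100)  # keep only the most recent 100 events

def handle_event(value):
    """Update the rolling window and return the current rolling average."""
    window.append(value)
    return sum(window) / len(window)

# Simulated clickstream events (e.g., units ordered per event)
for _ in range(5):
    rolling_demand = handle_event(random.randint(0, 10))
    print("rolling average demand:", rolling_demand)

The point of the sketch is the feedback loop: each incoming event immediately updates a metric a decision-maker can act on, rather than waiting for a daily or weekly batch report.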
|
The COVID-19 pandemic negatively affected millions of professionals working in the data and analytics fields – everything from consumer behaviour to supply chains was disrupted, and the economic fallout is furthering the damage. However, this crisis has also exposed technology’s Achilles’ heel. After the vaccines for COVID-19 were developed, the next normal emerged, allowing leaders […]
|
["IT Services"]
|
["AI (Artificial Intelligence)", "covid-19", "Data Science", "Data Scientist", "Machine Learning"]
|
Sohini Das
|
2021-12-19T13:00:00
|
2021
| 647
|
["data science", "Go", "API", "covid-19", "AI", "AWS", "Machine Learning", "Scala", "Git", "analytics", "GAN", "Data Science", "Data Scientist", "R", "AI (Artificial Intelligence)"]
|
["AI", "data science", "analytics", "AWS", "R", "Go", "Scala", "Git", "API", "GAN"]
|
https://analyticsindiamag.com/it-services/post-covid-19-business-model-for-data-science-companies/
| 2
| 10
| 1
| true
| true
| false
|
10,018,643
|
Why The Time Is Ripe For Starting A Career In Cloud
|
The cloud market is one of the few sectors that has seen skyrocketing growth in the wake of the pandemic. Cloud computing is brimming with potential and is projected to grow 18.4% in 2021, with an estimated total revenue of $304.9 billion. The Covid-induced remote working scenario has resulted in a proportional rise in cloud computing services. The adoption of service-oriented architecture and utility computing has led to growth in cloud computing, from high-capacity networks to low-cost computers and storage devices. Tech companies are working to make these services more affordable and accessible to startups, application developers and enterprises. With the increased adoption of cloud services comes the need for skilled professionals to meet these vast demands. If these numbers are to be believed, cloud computing is here to stay, as companies — both big and small — continue to seek qualified professionals with cloud skills such as AWS and more. High In Demand Many job portals and companies have numerous job openings in the cloud space, ranging from cloud architect to cloud network engineer to cloud developer. Reports suggest the number of job openings has increased exponentially compared to the last two years. LinkedIn pegs cloud computing as one of the most in-demand skills for six years running, and demand is only growing. Job portals such as Naukri, Monster, Indeed etc. have listed more than 30,000 current openings in cloud computing across companies. Some of the popular job roles are cloud architect, cloud developer, cloud network engineer, and cloud automation engineer, to name a few. Companies such as Microsoft, Cisco, Wipro and TCS are looking for professionals with specific skill sets in cloud computing. In fact, between 2013 and 2017, job postings in cloud computing grew by 121 per cent. To further attract professionals, the payout for cloud computing roles is also relatively high, at $100,000 annually on average. Sought-After Skills Talking about skills, cloud and distributed computing has been one of the most sought-after skills for quite a long time. Some of the other high in-demand skills are those around popular cloud platforms such as AWS, Microsoft Azure and Google Cloud Platform, to name a few. Apart from cloud platforms, professionals should also be adept in cloud storage, cloud networking, security, data management and more. Skills such as statistical analysis, distributed computing, middleware configuration, database maintenance etc. are also high in demand. Several industries, including banks, IT companies, and manufacturing companies, are hiring for roles in cloud computing, such as cloud administrators, cloud engineers, and more. These positions often demand specific programming languages like Python, Java or Ruby, and experience with operating systems such as Linux, a database querying language like SQL, and more. Starting A Career From expertise in AWS to building and maintaining public, hybrid and private cloud services for a company, a role in cloud computing calls for a combination of skills. There are numerous cloud certification programs that can help you acquire the required skill sets. For instance, Amazon offers AWS certifications, Google offers Google Cloud Platform certifications, and Microsoft offers its own Microsoft Azure certifications. Apart from these, several other certifications may be helpful for candidates seeking a cloud computing job. 
Another critical step is to build a cloud portfolio, which essentially means creating a portfolio of projects from your training experience. Opting for internships is also a good move. Finally, learning new skills and adding new experiences is essential to keeping up with opportunities in the cloud computing field. The more hands-on experience you have, the better your chances at gainful employment. All things considered, the metrics indicate that now is the perfect time to start a career in cloud computing.
|
The cloud market is one of the few sectors that has seen skyrocketing growth in the wake of the pandemic. Cloud computing is brimming with potential and is projected to grow 18.4% in 2021, with an estimated total revenue of $304.9 billion. The Covid-induced remote working scenario has resulted in a proportional rise in cloud […]
|
["AI Features"]
|
[]
|
Srishti Deoras
|
2021-01-21T14:00:00
|
2021
| 607
|
["Go", "AWS", "cloud computing", "AI", "R", "distributed computing", "RAG", "Python", "SQL", "Azure"]
|
["AI", "RAG", "cloud computing", "AWS", "Azure", "distributed computing", "Python", "R", "SQL", "Go"]
|
https://analyticsindiamag.com/ai-features/why-the-time-is-ripe-for-starting-a-career-in-cloud/
| 4
| 10
| 3
| false
| false
| false
|
10,160,985
|
V Narayanan to Succeed S Somanath as ISRO Chief
|
The government of India has appointed V Narayanan as the chairman of the Indian Space Research Organisation (ISRO) and secretary of the Department of Space for two years. He will succeed S Somanath, who has retired after a stellar tenure. Narayanan will assume the leadership role on 14 January 2025. He currently serves as the director of the Liquid Propulsion Systems Centre (LPSC), a research and development centre under ISRO. “Increasing India’s presence in space is my top priority,” said Narayanan. He plans to steer ISRO into an era of greater global prominence and increase India’s share in the space economy from 2% to 10%, besides fostering deeper collaborations with international space agencies. Narayanan is known for his expertise in rocket propulsion and spacecraft technology, and his appointment comes at a crucial time for ISRO. With the Gaganyaan mission’s uncrewed test flight on the horizon and increasing global competition in the space sector, his leadership is expected to steer India towards new milestones. He is widely credited for elevating India into an elite group of six nations with advanced cryogenic technology. As the architect of ISRO’s propulsion strategy, he has laid out a roadmap extending to 2037, ensuring sustained innovation in rocket and spacecraft development. Other key milestones include the preparations for the Venus Orbiter Mission (VOM) and Chandrayaan-4, a stepping stone to India’s space vision 2047, along with the groundwork for India’s first space station, the Bharatiya Antriksh Station (BAS), to be deployed by 2035. Narayanan’s vast experience and visionary approach are poised to further elevate India’s stature in the global space race. The announcement comes with a mix of celebration and nostalgia as the organisation bids farewell to Somanath, regarded as one of ISRO’s finest leaders. It was in his tenure that ISRO witnessed a series of groundbreaking missions, including Chandrayaan-3, Aditya-L1, TV-D1, XPoSat, and the recently launched SpaDeX mission. Somanath’s efforts have laid a strong foundation for India’s success, with major contributions not just to space technology but also to India’s National Quantum Mission (NQM), including research into building the first quantum communication ground station in Ladakh. Narayanan’s Rise as an Aerospace Visionary Narayanan hails from Melakattu village in Tamil Nadu’s Kanyakumari district. A brilliant academician, he holds a diploma and an AMIE in mechanical engineering, as well as an MTech in cryogenic engineering from IIT Kharagpur, where he graduated as a silver medallist. He further solidified his expertise with a PhD in aerospace engineering. These are some of his contributions to ISRO’s critical missions: Chandrayaan-2 and Chandrayaan-3: Spearheaded the development of the L110 liquid stage and C25 cryogenic stage of the LVM3, enabling precise spacecraft insertion into the Moon’s orbit. Gaganyaan Programme: Played a key role in human-rating the LVM3 vehicle, developing critical propulsion systems for the mission’s crew and service modules, and overseeing the successful crew escape system test. Chaired the national-level expert committee that analysed the Chandrayaan-2 landing, leading to the improved landing success of Chandrayaan-3. Narayanan has also received numerous awards for his contributions, including a gold medal from the Astronautical Society of India, a National Design Award by the National Design and Research Forum, and a National Aeronautical Prize by the Aeronautical Society of India.
|
As the architect of ISRO’s propulsion strategy, he has laid out a roadmap extending to 2037, ensuring sustained innovation in rocket and spacecraft development.
|
["AI News"]
|
["ISRO"]
|
Sanjana Gupta
|
2025-01-08T22:55:41
|
2025
| 531
|
["Go", "ISRO", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "RAG", "Ray", "GAN", "R"]
|
["AI", "Ray", "RAG", "R", "Go", "GAN", "innovation", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/v-narayanan-to-head-isro-as-new-chairman/
| 3
| 9
| 0
| false
| false
| false
|
10,165,842
|
Meta in Talks with TSMC to Launch its First In-House Own Chip
|
Meta has started testing its first in-house chip for training its AI systems, according to Reuters. The move is part of the company’s plan to reduce its reliance on chip suppliers like NVIDIA and lower its AI infrastructure costs. The chip is part of the Meta Training and Inference Accelerator (MTIA) series. If tests go well, Meta plans to increase production and use the chip more widely. The company is working with Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture it. The report suggests that Meta’s AI-related spending is a major part of its projected $114 billion to $119 billion expenses for 2025, including up to $65 billion in capital expenditures. The new chip is a dedicated AI accelerator, designed specifically for AI tasks. This makes it more efficient than the general-purpose GPUs typically used for AI training. Meta has previously struggled with its chip programme. It scrapped an earlier inference chip after poor test results and went back to buying billions of dollars’ worth of NVIDIA GPUs in 2022. However, Meta did deploy a custom chip last year for AI inference on recommendation systems for Facebook and Instagram. Executives revealed that they aim to use in-house chips by 2026 for both training and inference tasks. Last month, reports surfaced that OpenAI was working on developing its own custom AI chips to lessen its dependence on NVIDIA. The company was nearing completion of the design for its first in-house chip, which it plans to send to TSMC for fabrication in the coming months.
|
The company aims to use in-house chips by 2026 for both training and inference tasks, Reuters reported.
|
["AI News"]
|
["AI Chips", "Meta", "tsmc"]
|
Aditi Suresh
|
2025-03-11T18:01:48
|
2025
| 252
|
["Go", "API", "Meta", "OpenAI", "AI", "programming_languages:R", "recommendation systems", "programming_languages:Go", "AI Chips", "Aim", "ai_applications:recommendation systems", "tsmc", "R"]
|
["AI", "OpenAI", "Aim", "recommendation systems", "R", "Go", "API", "programming_languages:R", "programming_languages:Go", "ai_applications:recommendation systems"]
|
https://analyticsindiamag.com/ai-news-updates/meta-in-talks-with-tsmc-to-launch-its-first-in-house-own-chip/
| 2
| 10
| 1
| false
| false
| false
|
10,062,985
|
Analytics India Attrition Study 2022 – Complete Report
|
The Data Science India Attrition Report is an annual study that delves into attrition trends to provide a comprehensive view of the changing landscape of attrition in India’s data science industry. The report looks at the attrition rates across several categories, including geographies, company sizes, years of experience, industries, and cities. This study can benefit recruiters, entrepreneurs, industry policymakers, companies, and data science experts looking for an overall view of the attrition rate in India’s data science industry. You can access reports from previous years at the link below: 2021 The attrition rate in the Data Science market stands at 28.1% in 2021, a 12.1pp increase compared to 2020. The massive increase in the attrition rate is largely due to higher demand for data science professionals across industries. The increasing adoption of digitisation among various traditional firms, and the explosion of tech startups in India, accelerated the demand for professionals with digital skills. Professionals with expertise in data science, in particular, are in very high demand across industries. They help companies leverage available data to gain customer insights, predict future trends, streamline business processes, and more. In addition, due to their criticality to businesses, professionals in this domain are experiencing massive opportunities to switch their jobs for higher salaries, flexible work arrangements, ESOPs, and other benefits, as companies are on the hunt to recruit the best talent in the market. As a result, the attrition rate in the data science market has skyrocketed. Key Highlights The attrition rate for data science/analytics professionals for the year 2021 stood at 28.1%, up from 16.0% in 2020. The attrition rate of large companies with more than 10,000 employees is 30.8% and of companies with less than 50 employees is 26.7%. Bangalore has the highest attrition rate among metropolitan cities, at 29.7%, followed by Mumbai (28.8%), Kolkata (28.1%), and Delhi/NCR (27.8%). BFSI has the highest attrition at 34.2%, whereas the lowest is for Telecom at 20.5%. High growth firms like Startups and Boutiques have the highest attrition rates of 43.7% and 42.1%, respectively. The attrition rate among key data science roles is the highest for AI/ML Engineers at 34.3%. More than one in three (34.5%) data science professionals with less than two years of experience switched jobs in 2021 — this number reduces to 12.9% for professionals with more than 10 years of experience. Entry-level professionals switched jobs at a rate of 32.3%, compared to leadership roles, which have an attrition rate of 15.0%. Attrition by Company Size 37.5% The attrition rate is the highest for companies with a size of 5001-10000 at 37.5% 2 The attrition rate for employees in the 5001-10000 and 10000+ company size more than doubled in the last year 1/2 The attrition rate was around 50% for employees with less than two years of experience across all company sizes Overall, the attrition rate of data science professionals was the highest for companies sized 5001-10000 at 37.5%. It was also high for companies with more than 10,000 employees at 30.8%. This number is around 26.7% for companies with less than 50 employees and 28.6% for 51-200. The lowest attrition rate, at 25.0%, is for mid-sized companies with 201-500 employees. Over the years, the attrition rate for mid-sized companies is seen to be low compared to small enterprises or bigger firms. 
Smaller companies have lower stability, while bigger companies present data science professionals with fewer opportunities to scale in their careers. Companies across all sizes see a higher attrition rate in entry-level to mid-level roles than in management to leadership roles. Even companies sized 201-500 had the second-highest attrition rate (51.1%) for employees with less than two years of experience. The highest Y-o-Y rise (20.5pp) in attrition rate from 2020 to 2021 was seen in companies with 5001-10000 employees. Conversely, the increase was lowest (2.7pp) for companies with less than 50 employees. The attrition rate for companies with 501-1000 and 1001-5000 employees increased by relatively low margins of 12.6pp and 14.3pp, respectively. Attrition by Cities 29.7% Bangalore has the highest attrition rate among all metropolitan cities at 29.7% 2 The attrition rate in Bangalore and Chennai almost doubled (1.9 times) in 2021 on a Y-o-Y basis 1/4 More than one in four professionals who switched jobs in India moved to companies in Bangalore Considering that it is called the Silicon Valley of India, Bangalore provides the highest number of job opportunities across sectors for techies to switch jobs for better career growth. As a result, it has the highest attrition rate at 29.7%. Furthermore, around 27.7% of the data science professionals who switched jobs in India in 2021 moved to companies based in Bangalore. Mumbai stood in second place with an attrition rate of 28.8%. Bangalore and Mumbai also had the highest annual increase in their attrition rates, of approximately 13.9pp and 13.0pp. On the other hand, Hyderabad had the lowest attrition rate at 22.7% and also saw the lowest Y-o-Y growth of just 8.7pp. Sectors with a high attrition rate are predominantly based in Mumbai. These are also sectors seeing a high adoption of data-driven technologies, leading to more job opportunities in the field. The attrition rate by years of experience is highest for professionals with less than two years of experience across all cities, ranging from 31.7% for Hyderabad to 43.7% for Bangalore. On the other hand, the attrition rate is lowest for professionals with more than 10 years of experience, ranging from 13.7% for Chennai to 19.6% for Hyderabad. Cities like Kolkata and Pune see a high attrition rate among younger professionals as they seek opportunities in bigger cities like Bangalore, Delhi, and Mumbai. Attrition by Industries 34.2% BFSI had the highest attrition rate at 34.2% in 2021 21.2pp BFSI had the highest increase in attrition of 21.2pp in 2021 compared to the previous year 1.7 The attrition for BFSI (34.2%) is roughly 1.7 times that of the Telecom industry (20.5%), which has the lowest attrition The highest attrition rate is witnessed by BFSI at 34.2%, followed by Pharma & Healthcare at 32.8%. The key reasons for higher attrition across these industries can be attributed to the high competition for data science talent with the respective domain knowledge. To continuously enhance the quality of their apps and customer experience, Internet/eCommerce firms are chasing talent from within the industry by offering significantly higher packages and other perks. As a result, the attrition rate in the industry is the third-highest at 31.9%, an increase of 14.4pp on a Y-o-Y basis. The telecom sector had the lowest attrition rate at 20.5%. It also had the second-lowest annual increase of just 11.4pp in its attrition rate. 
While increasing digitisation has led to many opportunities for data scientists across sectors, traditional industries like Telecom, Oil & Industrial, or Education still tend to have lower attrition rates. Across most industries, professionals with less than two years of experience had the highest attrition rate, ranging from 28.3% for Telecom to 42.1% for BFSI. On the other hand, professionals with more than ten years of experience have the lowest attrition rate, ranging from 8.0% for Oil & Industrial to 18.9% for Pharma & Healthcare. The attrition rate of professionals with 3-5 years of experience in Telecom and Marketing & Advertising (31.4% and 38.5%) is higher than the attrition rate of professionals with less than or equal to two years of experience (28.3% and 29.5%). Attrition by Company Type 43.7% Across company types, the attrition rate is highest in Startups at 43.7% 1.7 The attrition rate in Startups is 1.7 times the rate in Domestic Firms 1/2 Almost one in two (47.7%) managers in Boutique analytics firms switched jobs in the last year Despite better pay scales and faster career advancement opportunities, the attrition rate among Startup and Boutique firms in India is the highest. The key reasons for such a high attrition rate include high uncertainty, overburdening employees with work, reluctance in power decentralisation, and culture. As a result, the attrition rate in these firms stands at 43.7% and 42.1%, respectively. High-growth firms like startups and boutique AI firms tend to have high attrition rates as data scientists jump ship fast within these firms, seeking better opportunities. This is followed by Consulting firms (34.5%) and Captives (30.5%). These firms offer competitive salaries, and professionals are often seen jumping ship within these firms for a better paycheck. A limited growth path, neck-and-neck competition, and poor work culture are among the main reasons for the attrition rate in IT Services firms. The attrition rate for these firms, however, is comparatively lower at 27.6%. Domestic firms witness a lower attrition rate due to comparatively slower adoption of emerging technologies. The culture in these firms also influences the low attrition rates to an extent. Hence, the rate of attrition is around 25.0% within domestic firms. Attrition by Years of Experience & Seniority 1st The attrition rate is the highest across most segments (cities, industries, firms, company size and seniority) for professionals with less than two years of experience 2.3 The attrition rate for professionals with 6-10 years of experience more than doubled (2.3 times) on a Y-o-Y basis 1/8 More than one in eight (12.9%) professionals with over 10 years of experience switched their jobs Professionals with less than two years of experience have the highest attrition rate at 34.5%—5.7pp higher than the previous year. On the other hand, the lowest attrition rate is for professionals with more than ten years of experience—at 12.9%, this is a Y-o-Y increase of 4.9pp. Hence, the gap between the attrition rates of those with more than ten years of experience and those with less than two years of experience grew further to 21.6pp in 2021, up from 20.8pp in 2020. A significant increase in attrition across the experience brackets of 3 to 5 and 6 to 10 years indicates a high demand for experienced hands-on data science professionals. 
The attrition rate for professionals with 3-5 years of experience (at 32.5%) grew by more than 1.8 times over the previous year, and for professionals with 6-10 years of experience (at 25.8%) by 2.3 times. 2.2 At 32.3%, entry-level professionals have an attrition rate more than twice (2.2 times) that of CXOs, who have an attrition rate of 15.0% 1/2 The attrition rate of entry-level positions in startups and consulting firms was higher than average, with around one in two professionals switching jobs in 2021 1/5 Almost one in five (19.0%) VPs across all industries switched jobs The highest attrition rate is seen for entry-level professionals at 32.3%, whereas CXOs see the lowest attrition at 15.0%. The attrition gap between CXOs and entry-level professionals was 17.3pp in 2021. The second-highest attrition rate by seniority is seen in senior to managerial roles, at 27.8%. The attrition rate for Directors and VPs is 23.1% and 19.0%, respectively. Attrition by Data Science Roles 1/3 More than one-third of professionals in AI engineering roles – AI/ML Engineer (34.3%) and Deep Learning Engineer (33.5%) – are actively switching jobs 28.6% Data scientists have an attrition rate of 28.6%, 3.6pp higher than data analysts 1/2 One in two AI/ML Engineers (48.2%) and NLP Engineers (49.2%) with less than two years of experience switched jobs last year The attrition rate is highest for NLP engineers at 36.0%, followed by AI/ML engineers and Deep Learning engineers at 34.3% and 33.5%, respectively. This attrition increases significantly for analytics professionals with less than two years of experience (48.2% and 41.6%, respectively). Data professionals in engineering roles are gaining importance as the issues with scalability of AI/ML models become more pertinent. A lack of good talent in the field also makes these professionals prone to attrition. Data Scientists have an attrition rate of 28.6%, compared to 27.3% for Business Intelligence Analysts and 25.0% for Data Analysts. These three roles have a higher attrition rate (62.3%, 64.7%, and 64.7%, respectively) when it comes to Boutique Analytics firms. High demand for data engineers is also reflected in their attrition rate, which stands at 31.9%. In the BFSI sector, this increases to 38.5%. Download the complete report here
|
The attrition rate for data science/analytics professionals for the year 2021 stood at 28.1%, up from 16.0% in 2020.
|
["AI Features"]
|
["attrition rate"]
|
Rahul Bhorayal
|
2022-03-21T10:00:00
|
2022
| 2,016
|
["data science", "Go", "AI", "ML", "Scala", "RAG", "NLP", "deep learning", "analytics", "attrition rate", "R"]
|
["AI", "ML", "deep learning", "NLP", "data science", "analytics", "RAG", "R", "Go", "Scala"]
|
https://analyticsindiamag.com/ai-features/study-analytics-india-attrition-study-2022/
| 3
| 10
| 5
| false
| false
| false
|
69,748
|
TomTom’s Touch Fitness Tracker launched recently in India
|
With TomTom launching its Touch fitness tracker in India, the country has got its first fitness tracker that combines body composition analysis (BCA) with steps, sleep and all-day heart-rate tracking, right from the wrist. In addition to Touch, the company has also announced the launch of the TomTom Adventurer GPS outdoor watch and the TomTom Spark 3 GPS multisport fitness watch range in India. The TomTom Touch fitness tracker can measure the percentage of body fat and muscle mass with just one push of a button to answer the question: Is what I’m doing, doing anything for me? Body composition gives a user an advanced understanding of their fitness level and how it’s changing over time. Until now, this metric has been available only with dedicated scales or expensive technology. The launch of the TomTom Touch fitness tracker now makes BCA more accessible to a broader audience. The Touch fitness tracker is designed to be worn 24-7 and includes everything you’d expect from the best fitness trackers available today – from tracking steps and sleep to all-day heart rate and the number of calories burned. It also comes equipped with a sports mode for running, cycling, or hitting the gym. Additionally, it lets you stay connected with smartphone notifications. The TomTom Adventurer is the new GPS outdoor watch built to elevate outdoor activities with dedicated sports modes for hiking, trail running, skiing and snowboarding. It is equipped with a 24x7 activity tracker and comes with GPS tracking and route exploration. The TomTom Spark 3 GPS multisport fitness watch range, meanwhile, comes equipped with Route Exploration, which helps users easily explore new trails and find their way back. Additionally, every time a user hits the road, the GPS trace is displayed on the GPS sports watch, allowing them to find their way back to the start. By combining the integrated compass and GPS information, anyone can run with the confidence of knowing which path to take and how to get back. TomTom Touch will be exclusively available at Amazon.in, while TomTom Adventurer and Spark 3 will be available at all leading e-tailers by mid-December 2016. Commenting on the launch, Andrew Cooper, Senior Vice President APAC, TomTom International BV, said, “With TomTom Touch fitness tracker you could see how your body composition changes over time to find out what works for you and understand how fit you are. Building on our years of experience in mapping and navigation, we are proud to also introduce path breaking Route Exploration technology in TomTom Adventurer Outdoor GPS watch and TomTom Spark 3 GPS fitness watches.” “We are confident that the Indian consumers will find their perfect fitness companion with TomTom Sports,” he added. “Obesity is considered the core of many diseases. With body composition analysis right at your wrist, it will be a GREAT indicator of your overall wellbeing and knowing your body composition changes. Now with this innovation, we’re making technology more accessible to everyone. So will our focus on fitter India,” concluded Hitesh Ahuja, Country Manager, TomTom India. With an aim to emphasise the fitness segment globally, TomTom’s new division, TomTom Sports, will house all the fitness products from the company. TomTom Touch will be available exclusively at Amazon.in at Rs. 13,999, while TomTom Adventurer and TomTom Spark 3 will be available at all leading e-tailers by mid-December 2016 at Rs. 25,999 and Rs. 13,999 respectively.
|
With TomTom launching its Touch fitness tracker in India, the country has got the first fitness tracker that combines body composition analysis (BCA) with steps, sleep and all day heart-rate tracking, right from the wrist. In addition to Touch, the company has also announced the launch of TomTom Adventure GPS Outdoor watch and the TomTom […]
|
["AI News"]
|
["Wearables India"]
|
Srishti Deoras
|
2016-11-28T12:51:21
|
2016
| 561
|
["Go", "programming_languages:R", "AI", "Wearables India", "innovation", "data_tools:Spark", "programming_languages:Go", "Aim", "ViT", "R"]
|
["AI", "Aim", "R", "Go", "ViT", "innovation", "programming_languages:R", "programming_languages:Go", "data_tools:Spark"]
|
https://analyticsindiamag.com/ai-news-updates/tomtoms-touch-fitness-tracker-launched-recently-india/
| 3
| 9
| 0
| false
| true
| false
|
10,111,026
|
JFrog & AWS Collaborate to Expedite Secure Machine Learning
|
JFrog Ltd, a Liquid Software company, has announced a new integration with Amazon SageMaker, enabling companies to build, train, and deploy machine learning models. The company has introduced new versioning capabilities for its ML Model Management solution, integrating model development into DevSecOps workflows. This will increase transparency around each model version, allowing developers, DevOps teams, and data scientists to ensure the correct, secure version of a model is utilised. The company’s integration with Amazon SageMaker ensures all artifacts used by data scientists or used to develop ML applications are saved in JFrog Artifactory. The integration is available to JFrog customers and users. “The combination of Artifactory and Amazon SageMaker creates a single source of truth that indoctrinates DevSecOps best practices to ML model development in the cloud – delivering flexibility, speed, security, and peace of mind – breaking into a new frontier of MLSecOps,” said Kelly Hartman, SVP, Global Channels and Alliances, JFrog. The company will conduct an educational webinar on January 31, in which it will discuss best practices for introducing model use and development into secure software supply chain and development processes. JFrog’s integration with Amazon SageMaker will enable organisations to maintain a single source of truth for data scientists and developers, ensuring all models are accessible, traceable, and tamper-proof. Furthermore, the company said this integration will bring machine learning (ML) closer to software development and production workflows, protect models from deletion or modification, and allow for the development, training, securing, and deployment of ML models. It can also scan ML licences for compliance with company policies and regulatory requirements.
|
JFrog Ltd, a Liquid Software company, has announced a new integration with Amazon SageMaker, enabling companies to build, train, and deploy machine learning models.
|
["AI News"]
|
[]
|
Arya Vishwakarma
|
2024-01-18T14:00:22
|
2024
| 261
|
["Amazon SageMaker", "machine learning", "programming_languages:R", "AI", "mlops_tools:SageMaker", "ML", "GAN", "DevOps", "R"]
|
["AI", "machine learning", "ML", "Amazon SageMaker", "R", "DevOps", "GAN", "mlops_tools:SageMaker", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/jfrog-aws-collaborate-to-expedite-secure-machine-learning/
| 2
| 9
| 1
| false
| false
| false
|
33,168
|
HTC Global Services To Recruit 3,000 Techies In India
|
HTC Global Services is planning to recruit up to 3,000 techies in India over the coming two years as part of its plan to reach a revenue target of $1 billion by the end of 2020. The US-based IT services company has a strong employee base across its development centres in Chennai, Hyderabad, and Bengaluru. The company has a workforce of nearly 11,000 employees globally, including Ciber and CareTech, which were acquired in 2017 and 2014, respectively. HTC, which acquired Ciber, an IT consulting firm, for $93 million in 2017, said that, post integration with the company, it has a larger base of clients. The company is looking to increase its share of automation-led services to achieve faster growth. As a mid-sized IT services firm, HTC is competing with larger global firms such as IBM, Capgemini and Accenture to leverage offshore resources in India, as they struggle to find talent that works in newer areas such as digital and cloud. “Customers are looking for better value, resulting in an increased demand for talent with domain knowledge. We are finalising hiring numbers, and it will not be less than 2,000-3,000 people by 2020 for India,” said Chary Mudumby, CTO of HTC Global Services, to a leading daily. Mudumby further added that they are trying to bring in talent from educational institutions because the current market demand for emerging technology skills is high. Emerging Technologies To Boost Company’s Prospects: Companies such as HTC and others have seen a disruption in their business model as clients in the US and other markets are now demanding more digital technology-led delivery. HTC is looking to increase its share of automation-led services to rev up its growth globally. In September 2018, HTC Global Services India and Automation Anywhere, a developer of robotic process automation software based in California, inked a pact to further boost HTC’s RPA (robotic process automation) services by building bot-based, industry-specific solutions with cognitive abilities. In September 2017, the IT and ITeS firm had acquired Ciber Inc, a US-based global information technology consulting services and outsourcing company, for about $93 million. In 2014, HTC Global acquired CareTech Solutions to expand its footprint in the healthcare industry. According to Madhava Reddy, president and CEO of HTC Global Services, the acquisition of Ciber would add over 3,500 employees to its operations globally. The company is also confident of achieving its target of $1 billion in revenue by 2020 with the acquisitions of CareTech and Ciber. The company is also set to add another 5,000 employees by 2020. The combined strength of both companies will help HTC Global offer customers a comprehensive set of services and drive material growth across a range of industry sectors. Reddy further added that all these subsequent buyouts will also help HTC Global expand into different parts of the US. The latest acquisitions will also boost the organisation’s ability to deliver exceptional customer service and gain deep expertise in cutting-edge technologies. The addition of Ciber’s workforce will further propel the company’s growth in the marketplace. According to the company’s statement, it is now using automation and predictive analytics to bring down the need for support in application development and services for a client’s business. 
It is looking at an environment wherein it won’t have to invest much in support, as there is a demand for better value and its customers are looking to invest less in support, even in the absence of pricing pressure. “Customers of HTC and Ciber have experienced the positive benefits of the acquisition and integration, and the capabilities have doubled,” said Mudumby, CTO of HTC Global Services, to a leading daily.
|
HTC Global Services is planning to recruit up to 3,000 techies in India in the coming two years as part of taking its revenue target to $1 billion by the end of 2020. The US-based IT services company has a strong employee base across its development centres in Chennai, Hyderabad, and Bengaluru. The company represents […]
|
["AI News"]
|
["AI Jobs"]
|
Martin F.R.
|
2019-01-09T10:52:42
|
2019
| 606
|
["AI", "R", "RPA", "AI Jobs", "Git", "RAG", "automation", "analytics", "disruption", "GAN", "predictive analytics"]
|
["AI", "analytics", "RAG", "predictive analytics", "R", "Git", "GAN", "automation", "RPA", "disruption"]
|
https://analyticsindiamag.com/ai-news-updates/htc-global-to-recruit-3k-techies/
| 3
| 10
| 3
| false
| false
| false
|
10,007,457
|
Hands-On Guide To BootStrap Sampling For ML Performance Evaluation
|
In machine learning, after we build predictive models we often evaluate them using different error metrics like the accuracy score, confusion matrix, etc. We also make use of cross-validation techniques to thoroughly understand the ability of the model to generalise well on unseen data. In cross-validation, we divide the data set randomly into a number of folds and validate the performance. The scores generated for each fold are averaged, and through this average we analyse the expected performance of the model on production data. Bootstrapping is a similar technique that helps analyse the performance of a model. In bootstrapping, random data sets are generated; on each data set, a model is fitted on the training portion and evaluated on the testing portion. In this article, we will talk more about Bootstrap Sampling and understand how it works. We will first understand how bootstrapping works and then implement it on a data set. For this experiment, we will make use of the Iris data set, which can be downloaded from Kaggle. What will we learn from this article? How can we evaluate model performance? What is Bootstrap Sampling? How do we use it? How can Bootstrap Sampling be used to evaluate model performance? What is Bootstrap Sampling? How does it work to check model performance? While building a predictive model over some data set, we always divide the data set into training and testing. The model is trained on the training data, and predictions are made using the model on the testing data. These predictions are evaluated using different error metrics like the accuracy score, confusion matrix, etc. These error metrics help us validate how well the model has predicted on the testing data. But what if we want an estimate of how the built model will perform on unseen or production data? To check this, we have techniques in machine learning such as cross-validation. Similar to cross-validation, we have another technique called Bootstrap Sampling. It is a technique that uses random samples from the data to generate new training and testing data sets. Similar to cross-validation, where we define the number of folds, here we define the number of iterations, which decides the total number of data sets we want to generate. Apart from the iterations, we also define the size, which decides the number of data points we want in the new data sets. If we do not define this, it will generate the same number of points as are present in the original data. Let us now practically see how bootstrapping works. We will first define the required libraries and data. Then we will generate 10 data sets. Refer to the below code for the same. Practical Implementation of Bootstrap Sampling

from sklearn.utils import resample
import numpy as np

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
n_iterations = 10
n_size = int(len(data) * 1)
for i in range(n_iterations):
    # sample with replacement to form the training set
    train = resample(data, n_samples=n_size)
    # points never drawn into the training set form the test set
    test = np.array([x for x in data if x not in train])
    print("Train_data ->", train, " ", "Test_data ->", test)

As we can see in the output above, we have now generated 10 different data sets, where the training data has data points that get repeated, whereas the testing data has all those data points that are not present in that training data. These generated data sets are different from each other, but not completely, since this method is known as sampling with replacement. 
Implementation of Bootstrap Sampling On the Iris Data Set Now we will try bootstrap sampling on a specific data set and will compute the range to check the model performance on unseen or production data. We will first define the libraries and then load the data. Use the below code for the same.

from sklearn.utils import resample
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from matplotlib import pyplot
import numpy as np
import pandas as pd

data = pd.read_csv('Iris.csv')
print(data)

Now we will define the data values, the number of iterations we want, and the size of each generated data set. Then we will create new data sets using bootstrap sampling. We are using the RandomForest classifier for this model. All the predictions made by the model are evaluated using accuracy scores, which are kept in the scores variable. Refer to the below code for the same.

values = data.values
n_iterations = 10
n_size = int(len(data) * 1)
scores = list()
for i in range(n_iterations):
    # bootstrap sample (with replacement) as the training set
    train = resample(values, n_samples=n_size)
    # rows not drawn into the training set form the test set
    test = np.array([x for x in values if x.tolist() not in train.tolist()])
    rfcl = RandomForestClassifier()
    rfcl.fit(train[:, :-1], train[:, -1])
    predictions = rfcl.predict(test[:, :-1])
    score = accuracy_score(test[:, -1], predictions)
    print(score)
    scores.append(score)

Since we now have all the scores, we will check the range of the accuracy using a histogram visualisation. We will be using 95% confidence and will be checking the accuracy. Use the below code for the same.

pyplot.hist(scores)
pyplot.show()
alpha = 0.95
p = ((1.0 - alpha) / 2.0) * 100
lower = max(0.0, np.percentile(scores, p))
p = (alpha + ((1.0 - alpha) / 2.0)) * 100
upper = min(1.0, np.percentile(scores, p))
print('%.1f confidence interval %.1f%% and %.1f%%' % (alpha * 100, lower * 100, upper * 100))

Conclusion Through this article, we explored a model performance evaluation technique, i.e., Bootstrap Sampling. We first discussed what it is and how it works. We then implemented it on the Iris data set: we generated different data sets, built a random forest model over each, and computed the accuracy scores. These scores were then used to check the range of accuracy at a confidence level of 95%. The only difference between cross-validation and bootstrap sampling is sampling with replacement; otherwise, both work in a similar way.
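For contrast with the bootstrap loop above, the same evaluation idea without replacement is ordinary k-fold cross-validation. A minimal sketch using scikit-learn's built-in helper (shown here on the bundled iris data rather than the Kaggle CSV, which is an assumption of convenience) could look like this:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# 5 folds: each observation appears in exactly one test fold, with no replacement
cv_scores = cross_val_score(RandomForestClassifier(), X, y, cv=5)
print("fold accuracies:", cv_scores)
print("mean accuracy: %.3f" % cv_scores.mean())

Both approaches average a model's score over multiple train/test splits; bootstrap sampling simply builds those splits by drawing with replacement instead of partitioning the data into disjoint folds.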
|
In this article, we will talk more about BootStrap Sampling and understand its working. We will first understand how bootstrapping works and then we will implement it on a data set.
|
["Deep Tech"]
|
["Machine Learning"]
|
Rohit Dwivedi
|
2020-09-16T15:00:25
|
2020
| 926
|
["Go", "NumPy", "machine learning", "programming_languages:R", "AI", "ML", "Machine Learning", "RAG", "Ray", "Matplotlib", "R"]
|
["AI", "machine learning", "ML", "Ray", "NumPy", "Matplotlib", "RAG", "R", "Go", "programming_languages:R"]
|
https://analyticsindiamag.com/deep-tech/hands-on-guide-to-bootstrap-sampling-for-ml-performance-evaluation/
| 3
| 10
| 1
| true
| true
| false
|
10,024,040
|
Roundup Of AI & Supercomputing Products Unveiled At NVIDIA GTC 2021
|
“Software will be written by software running on AI computers,” said Jensen Huang, CEO, NVIDIA. Dubbed the first kitchen keynote, last year's address saw the NVIDIA CEO announce DLSS (which revolutionised RTX), RTX servers, NVIDIA Jarvis, the A100 and the DGX A100. This year too, Jensen Huang's keynote speech acted as the symbolic ribbon-cutting for one of the most anticipated events in the tech world. The keynote highlighted the following developments:

NVIDIA opens gates to its Omniverse

Image credits: NVIDIA

Built on top of RTX, Omniverse is a platform to connect multiple 3D worlds into a shared virtual world. It takes its inspiration from the term “metaverse”. “Pieces of the early-metaverse vision are already here in massive online social games like Fortnite or user-created virtual worlds like Minecraft,” said Huang. “Omniverse is a platform built from the ground up to be physically based. It is fully path traced. Physics is simulated with NVIDIA PhysX, materials are simulated with NVIDIA MDL, and Omniverse is fully integrated with NVIDIA AI,” he added. The platform is cloud-native and multi-GPU scalable, runs on any RTX platform, and can be streamed remotely to any device.

Foray into data centre markets

The biggest takeaway from the keynote is the launch of NVIDIA's first data centre CPU, Grace. Each CPU delivers 300 SPECint, with a total of over 2,400 SPECint-rate CPU performance from an eight-GPU DGX system. The chipmaker also announced the first DPU made for AI and accelerated computing, BlueField-3, which allows organisations to build applications with industry-leading performance and data centre security. According to NVIDIA, BlueField-3 can achieve 400 Gb/s and has 10x the processing capability of BlueField-2. It is optimised for multi-tenant, cloud-native environments, offering software-defined, hardware-accelerated networking, storage, security and management services, and delivers the equivalent of up to 300 CPU cores of data centre services.

Huang also announced a new DGX Station, which has four 80GB A100 GPUs, with more memory and bandwidth than the original DGX Station. Meant for AI research, it also has refrigerated liquid cooling for its EPYC CPU and four A100 GPUs. The station runs at a quiet 37 dB while drawing up to 1,500 W of power and delivers up to 2.5 petaFLOPS of floating-point performance. One of its most impressive features is that it transfers 8TB per second. Talking about performance, Huang said that Megatron could linearly scale training up to 1 trillion parameters on the DGX SuperPOD with advanced optimisations and parallelisation algorithms.

MLOps and Enterprise AI

NVIDIA EGX enterprise platform: The platform enables both existing and modern, data-intensive applications to be accelerated and secured on the same infrastructure. It delivers the power of accelerated computing from the data centre to the edge with a range of optimised hardware, easy-to-deploy application and management software, and a vast ecosystem of partners who offer EGX in their products. Huang also gave a glimpse of frameworks that encapsulate the entire workflow to customise AI models, applying transfer learning to your data to fine-tune models. “No one has all the data – sometimes it's rare, sometimes they are trade secrets. No model can be trained for every skill. And the specialised ones are the most valuable,” said Huang as he introduced NVIDIA TAO. TAO has a federated learning system allowing multiple users to train a shared model while preserving data privacy.
Using TensorRT, it optimises the model for the target GPU system.

A Quantum cue

NVIDIA's cuQuantum, an acceleration library designed for simulating quantum circuits, is meant for both tensor network and state vector solvers. It is optimised to scale to large GPU memories, multiple GPUs and multiple DGX nodes. Developers can use it to speed up quantum circuit simulations based on state vector, density matrix, and tensor network methods by orders of magnitude. Stay tuned to AIM for more updates from GTC 2021.
|
Highlights of NVIDIA CEO Jensen Huang’s keynote.
|
["AI News"]
|
["high performance computing"]
|
Peter Mathew
|
2021-04-15T18:00:00
|
2021
| 635
|
["federated learning", "Go", "AI", "ML", "MLOps", "Scala", "RAG", "Ray", "Aim", "R", "high performance computing"]
|
["AI", "ML", "MLOps", "Aim", "Ray", "federated learning", "RAG", "R", "Go", "Scala"]
|
https://analyticsindiamag.com/ai-news-updates/roundup-of-ai-supercomputing-products-unveiled-at-nvidia-gtc-2021/
| 4
| 10
| 3
| false
| false
| false
|
926
|
10 Best Fitness Bands Money Can Buy in India: IoT India Magazine Picks for 2016
|
Fitness bands were among the first wearables to be launched, and much of the credit goes to early pioneers in this area like Fitbit and Jawbone for bringing the wearable industry to the forefront of consumer awareness. Yet, looking back at 2016, the fitness band market is up for a shake-up, not just in India but the world over. Pebble, an early pioneer in fitness trackers, recently shut shop, and the market gets more crowded with each passing day; by one estimate, there are over 100 different brands in this space alone. We recently came out with our yearly pick of 10 smartwatches to look out for. Here is our pick of the 10 best fitness bands available in India this year (in alphabetical order).

Beurer AS 80
The activity sensor continuously tracks physical activity and monitors quality of sleep. Optimum activity monitoring and sleep analysis with the free HealthManager app. Transfer: by Bluetooth® low energy technology. Activity tracking: number of steps, distance, calorie consumption, activity duration and achievement of daily activity goals. Sleep tracking: tracks sleep movements and sleep duration. Time display.
Price: INR 8,200

Fitbit Charge HR
Let your heart be your guide with Charge HR. Monitor heart rate automatically and continuously right on your wrist to accurately track calorie burn, maintain workout intensity, maximize training and optimize health—all without an uncomfortable chest strap. Take control of your goals by using Charge HR to record your workouts and track all-day activity like heart rate, steps, distance, calories burned, stairs climbed and active minutes. With an impressive battery life of up to 5 days* and instant access to every stat, you don't have to look far for motivation to keep moving.
Price: INR 12,990

Garmin Vivosmart HR+
GPS tracks distance and pace while mapping out your run or walk. The swim-friendly, sleek band is comfortable to wear all day, and the always-on touchscreen display shows your stats, even in sunlight. Measures steps, distance, calories, floors climbed, activity intensity and heart rate on your wrist¹. Receive a full suite of smart notifications, including email, call, text and social media alerts — all from your wrist¹. Auto sync to Garmin Connect™ Mobile to join fitness challenges, review data and receive smart coaching.
Price: INR 20,990

GOQii Life Band
GOQii Premium is a leading fitness band in the affordable segment and is the most well-known brand from India. The band lets you track and measure your fitness activity and comes with support from personal fitness experts for a 3-month period.
Price: INR 3,999

Intex Fitrist Pulzz
The Intex FitRist Pulzz smartband is loaded with features like step count, distance travelled and calories burnt. It is the second fitness band launched by Intex, after the original FitRist. It has all the smart features, including heart rate sensing and IPX7 water resistance.
Price: INR 1,799

Jawbone UP 3
UP3™ is designed to give you a complete picture of your heart health. Resting Heart Rate gives you your baseline when you wake up, while Passive Heart Rate is measured throughout the day and helps you understand how the habits of your daily life affect your heart. You spend a lot of your life asleep; if you get a good night's sleep tonight, you're going to have a better tomorrow. Your body will be refreshed and your mind will be sharp.
UP3™ tracks your sleep automatically, measuring Deep, Light and REM sleep, and gives tips to help you get a better night's rest, one night at a time.
Price: INR 9,999

Lycos Life Advanced Interactive Smart Band
LYCOS LIFE securely and seamlessly logs in to your favorite apps and websites on your phone, easily unlocking them without the hassle of remembering your passwords. With LYCOS LIFE, access to your phone is just a tap away. While activating the smartband with the app, enable Bluetooth connectivity to allow communication between LYCOS LIFE and handheld devices. The LYCOS LIFE fitness band allows users to easily monitor steps, calories burned, speed and heart rate. It helps users meet goals, and its reminders motivate them to get up and get moving. The LYCOS LIFE heart rate monitor and live EKG/ECG readout help users improve their workouts and view their heart activity instantaneously.
Price: INR 5,625

Mi Band 2
Mi Band 2 uses an OLED display so you can see more at a glance. Simply lift your wrist to view the time and tap the button for steps and heart rate. The improved pedometer algorithm in Mi Band 2 filters out unnecessary movements, measuring steps taken and exercise more accurately. With a built-in motion sensor, Mi Band 2 knows exactly when you begin your workout; you don't have to switch modes or tell it before you start. Measure your heart rate to adjust the length and intensity of workouts. Keep calm and work toward your fitness goals!
Price: INR 1,999

Moov Now
Moov Now can be strapped to the arm or ankle and can be used with a variety of apps for tracking specific workouts. The fitness tracker offers advice via a virtual personal trainer based on the user's exact position. Moov wants to be more than just a fitness wearable; the tracker comes with an accelerometer, gyroscope and magnetometer.
Price: INR 5,001

Withings Pulse Ox
Whatever your fitness level or style, the Withings Pulse Ox can help you be more active and improve your health. During the day it captures steps, distance walked, elevation climbed and calories burned. At night, it monitors your sleep cycles. And when asked, it measures your heart rate and blood oxygen level. With this data at hand, Pulse Ox empowers you to make informed choices.
Price: INR 8,999
|
Fitness Bands were among the first wearables to be launched. Much of the credit goes to the early pioneers in this area like Fitbit and Jawbone to bring wearable industry to the forefront of consumer awareness. Yet, looking back to 2016, fitness bands market is up for a shake, not just in India but world […]
|
["AI Features"]
|
[]
|
Дарья
|
2016-12-13T10:50:51
|
2016
| 952
|
["Go", "programming_languages:R", "AI", "ML", "programming_languages:Go", "ViT", "R"]
|
["AI", "ML", "R", "Go", "ViT", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/10-best-fitness-bands-money-can-buy-india-iot-india-magazine-picks-2016/
| 4
| 7
| 2
| false
| true
| false
|
10,166,051
|
Is MCP the New HTTP for AI?
|
What if there was a USB-C port for AI applications—a universal connector for AI systems? Meet Anthropic's Model Context Protocol (MCP), the newest kid on the block. This open-source protocol allows different AI models to connect with the same tools and data sources, much like standard ports enable different devices to work together. Amid the curiosity surrounding it, there has been a surge of people talking about MCP, its benefits, and how it can make things convenient for developers. Could it be the torchbearer in accelerating the ease of AI integration?

What is MCP?

Simply put, MCP is a standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. It aims to help frontier models provide more relevant responses. There are three components of the protocol for developers: the MCP specification, local MCP server support, and an open source repository of MCP servers. It follows a client-server architecture, where a host application can connect to multiple servers. Santiago Valdarrama, a computer scientist, explains it as an extra layer, on top of how things work traditionally, for connecting AI agents to services like Slack, Gmail, or a database. He said that MCP reduces complexity, even though it is an added layer. Valdarrama further explains that the extra layer is an MCP server, which makes it possible for developers to replace the AI agent and still have the integrations work without any extra effort. One can use this to add improved functionality to AI coding tools like Windsurf or Cursor.

Is it the Same as APIs?

In an X thread, Valdarrama explained that MCP is not just another API lookalike. An API exposes its functionality using a set of fixed and predefined endpoints, such as products, orders, or invoices. If you change the number of parameters for such endpoints, or add new capabilities to the API, the clients will also need modifications. However, when dealing with MCP, Valdarrama said, “Let's say you change the number of parameters required by one of the tools in your server. Contrary to the API world, with MCP, you won't break any clients using your server. They will adapt dynamically to the changes!” He added, “If you add a new tool, you don't need to modify the clients either. They will discover the tool automatically and start using it when appropriate!”

It is as Boring and Exciting as HTTP

Matt Pocock, an AI educator, finds MCP both boring and exciting at the same time—for the same reasons that tech like REST, HTTP, SOAP, and GraphQL got traction. He added that MCP helps reduce friction and makes LLMs cooler. Robert Mao, founder of ArcBlock, a platform to help build decentralised apps, also shared the sentiment. “HTTP is a protocol for browsers, while MCP is a protocol designed for AI,” he wrote on X.

Use Cases of MCP

There have been numerous developments by companies and individuals leveraging MCP. Perplexity has built an MCP server for Sonar, its AI answer engine, to enable AI assistants to provide real-time web search and research capabilities. Composio, an AI startup that helps build AI apps, launched fully managed MCP servers with auth support. This will help integrate several apps like Google Sheets, Zoho, Salesforce, and more with AI coding platforms like Cursor, Windsurf, and the Claude Code desktop app easily. A developer integrated Cloudflare's MCP-worker to 10x his Cursor workflow experience.
Another developer made an MCP server with tools for accessing all the models on Replicate, a platform to run and deploy AI models, and connected it through Claude to generate art. “This is so magical 💖 I gave Claude access to @replicate and we generated some art together! How it works: I made a Model Context Protocol server that has tools for accessing all the models on Replicate. Claude puts them together intelligently into a workflow. Link below 👇 pic.twitter.com/FpxsFQpLYC” (web weaver, @deepfates, January 30, 2025)

Google's Firebase, a mobile and web app development platform, integrated MCP support into its AI framework, Genkit. Cline, an autonomous coding agent, lets you build and use MCP servers. LangChain also introduced MCP adapters to allow its agents to connect to tools in the MCP ecosystem. Beyond its popularity in terms of usage, the concept encouraged IBM to introduce a similar protocol, the Agent Communication Protocol (ACP), which could also be a signal that the protocol solves something useful. At the same time, there have been some mixed reactions. When a user on X asked Andrej Karpathy, founder of Eureka Labs, for his thoughts on MCP, he said, “Please make it stop.” Learn more about the technical aspects of MCP on its documentation website.
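To make the idea of dynamically discovered tools concrete, here is a minimal sketch of an MCP server in Python. It assumes the FastMCP helper from the official MCP Python SDK; the server name, the tool and its hard-coded response are purely illustrative and are not taken from any of the projects mentioned above.

from mcp.server.fastmcp import FastMCP

# A named server that MCP clients (for example, a desktop assistant or an AI coding tool)
# can launch and connect to.
mcp = FastMCP("demo-weather")

@mcp.tool()
def get_temperature(city: str) -> str:
    """Return a temperature reading for a city (hard-coded for this sketch)."""
    # A real server would call out to a weather service here.
    return f"It is 24 degrees Celsius in {city}."

if __name__ == "__main__":
    # Serves over stdio by default, so a local client can discover and call the tool.
    mcp.run()

Because clients discover tools from the server at runtime, adding another @mcp.tool() function later would, as Valdarrama notes, not require changing the clients at all.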
|
Anthropic’s Model Context Protocol is a standard for connecting AI assistants to the systems where data lives.
|
["AI Features"]
|
["AI Assistant", "Anthropic"]
|
Ankush Das
|
2025-03-13T19:27:49
|
2025
| 782
|
["Anthropic", "Go", "AI assistants", "API", "AI", "AI Assistant", "RAG", "LangChain", "Aim", "GraphQL", "R"]
|
["AI", "Anthropic", "LangChain", "Aim", "RAG", "AI assistants", "R", "Go", "API", "GraphQL"]
|
https://analyticsindiamag.com/ai-features/is-mcp-the-new-http-for-ai/
| 3
| 10
| 2
| true
| false
| false
|
10,051,607
|
IBM Global Business Services Renamed IBM Consulting
|
IBM has announced that IBM Consulting is the new brand name of its global professional services business, previously known as IBM Global Business Services. “The change to IBM Consulting represents the significant market opportunity that has opened up in front of us, with many organizations in India and globally, seeking people and business partners to help them co-create and co-execute and co-operate their future operations. IBM Consulting is a growth vector for IBM in India and globally as we work with clients as their strategic business partner to apply hybrid cloud and AI technology to achieve their digital transformation goals,” said Sandip Patel, Managing Director, IBM India. Enterprises in every industry are seeking to navigate digital and business transformation with speed and agility. They require a technology consulting services partner who understands this moment’s stakes and will work with them to drive change successfully. Closely aligned with the IBM strategy of hybrid cloud, AI, and the ecosystem’s power, IBM Consulting is poised to deliver rapid business value while acting as a truly collaborative partner. In India, organizations across industries including Parle, BestSeller, State Bank of India, Amul, IOCL, Puravankara and others have embarked on their digital transformation journey with IBM Consulting. Sectors including banking, financial services & insurance, retail and Global Captive Centers (GCCs) are currently the fastest-growing focus areas for IBM Consulting in India. With 140,000+ skilled professionals in 150+ countries, the full breadth of IBM Consulting services includes strategy, experience, business process design and operations, data and analytics, systems integration, application modernization, hybrid cloud management and application operations. As per the company, no other consulting provider offers the innovation and technology advantage IBM Consulting’s clients gain from having access to IBM Research and the team’s close connection with IBM technologies like the Red Hat hybrid cloud platform and IBM artificial intelligence and automation software.
|
As per the company, no other consulting provider offers the innovation and technology advantage IBM Consulting’s clients gain from having access to IBM Research.
|
["AI News"]
|
["ai consulting", "cloud business intelligence solutions", "Cloud Computing", "digital transformation", "IBM", "Machine Learning", "partnership India"]
|
Victor Dey
|
2021-10-14T14:03:07
|
2021
| 306
|
["Go", "ai consulting", "cloud business intelligence solutions", "artificial intelligence", "API", "AI", "digital transformation", "Machine Learning", "Git", "automation", "Cloud Computing", "IBM", "analytics", "GAN", "partnership India", "R"]
|
["AI", "artificial intelligence", "analytics", "R", "Go", "Git", "API", "GAN", "digital transformation", "automation"]
|
https://analyticsindiamag.com/ai-news-updates/ibm-global-business-services-renamed-ibm-consulting/
| 2
| 10
| 4
| false
| false
| false
|
7,737
|
Has the Insurance industry mastered the art of utilising big data?
|
“Big Data” – two small words that, when combined, can have different meanings for different audiences. However, irrespective of the meaning, for the insurance sector, with no physical products to sell, Big Data has now become a business imperative. Historically, the use of mathematics coupled with financial theory to analyse and understand the cost of risk has been the backbone of the insurance sector. While the analytics undertaken by actuaries are critically important to an insurance company, the advent of modern technology, as well as the data explosion currently taking place, has expanded and reinvented the core disciplines of analysts. Today's advanced analytics in insurance have redefined the role, scope and boundaries of actuarial science.

Like most companies in the financial services industry, life insurers collect a substantial amount of customer data during the application process, but as with many other industries, data collection post underwriting is marginal. This is slowly changing due to new customer channels and touch points. Instead of relying only on internal data sources such as loss histories, which was the norm, insurers have now begun to analyse the individual. For example, if a customer has held a policy with an insurance company for close to 6 years, paid his premium on time, has a good credit score, is married with a 6-month-old child and falls in the high-income category, the insurer can, through analytics and an understanding of customer behaviour, offer him a higher cover for his premium, besides also talking to him about insurance and education plans for his child as well as a family health plan.

The insurance sector in India has been engaged in identifying and understanding the application of Big Data initiatives within its businesses. India has a low penetration of 0.7% and 4% in the general and life insurance sectors respectively. With the influx of FDI in India, there is increased pressure on domestic players to up their game and expand their geographical reach. As internet access through mobile technology continues to evolve, consumers are carrying out banking and retail transactions online and via mobile devices. This emphasizes the need for well-managed data services that reduce turnaround time and enhance business efficiencies. The room for growth is tremendous, and the scope for Big Data in insurance has never been more pronounced. According to Nasscom, India's Big Data outsourcing opportunity is projected to reach around USD 1.2 billion by the end of the year 2015.

Having said that, Big Data initiatives are in their infancy in India, and organizations are yet to explore the full potential of Big Data and the value it brings to them. Most Indian organizations are still grappling with the amount of data they generate. The early adopters of Big Data are expected to emerge from sectors such as BFSI, retail, hospitality and media. The challenge faced by most sectors is to analyze the data collected, identify new opportunities, and store the data securely and affordably. In the financial services sector, and more specifically in insurance, the benefits of Big Data initiatives are likely to translate into a better customer experience and operational efficiency while reducing fraud and thus losses. Economical customer acquisition and persistency are the big challenges, and Big Data initiatives will definitely help in managing these problems in a data-driven manner.
Leveraging developing data sources like the Credit Information Companies, in conjunction with internal data initiatives, will help insurers structure data better and make it actionable. Thanks to the Reserve Bank of India, the regulatory structure in India for data sharing in the banking industry is fairly evolved. Specialized organizations called Credit Information Companies (CICs) are licensed by the Reserve Bank of India to collect and disseminate banking information. While insurance companies can access the CICs, they are currently not required to contribute data to them. However, the insurance sector is also on the path to building a similar data repository that can help the industry. As in other countries, we expect that over a period of time these data repositories will converge and begin to talk to each other. An insurer can make the right decision wherever it is needed, including underwriting, application and claim fraud, retention, cross-selling, claims assessment and collections, using the appropriate products and services, and thus take its first step into the Big Data domain.

India Inc. can leverage Big Data analytics to innovate and transform internally as well as through products and experiences. Moving towards data-driven and evidence-based business models allows an enterprise to understand its customers and empower its workforce. Organizations are now realizing the value of Big Data analytics in mining customer preferences and propensity, as well as in devising technologies that deliver actionable strategies to the front end. As far as pricing goes, Big Data will help in optimizing risk segmentation, leading to a better pricing structure. Better insights into customer segments and preferences can also help in developing innovative and customized products and services, while also helping insurers channelize their resources in a more effective and organized manner.

Large capital spends are not a requirement to derive benefits from organizational data resources. Often, simple data marts focused on specific use cases can drive value in organizations and may be the appropriate starting points to get business teams ready to accept increasing levels of complexity in analytics solutions. The effective use of data analytics can aid enhanced customer lifecycle management in terms of heightened customer intelligence, new sourcing mechanisms and better customer monitoring techniques, and of course can help prevent fraudulent claims. Data analytics can be used to understand consumer behavior, for segmentation, and to develop the right offer for the right customer. We live in an increasingly connected world that will see the Internet of Things have a huge impact on the way insurers engage with their customers, be it wearables that produce data on the health of the wearer or a telematics-connected car that communicates maintenance and vehicle usage data, which ultimately determines the premium an automobile owner pays. The days of one size fits all are nearing an end; data analytics is forcing insurers to go the “Made to Measure” way.
|
“Big Data” two small words when combined together can have different meanings to different audiences. However, irrespective of the meaning, for the Insurance sector, with no physical products to sell, Big Data has now become a business imperative. Historically, the usage of math coupled with financial theory to analyse and understand the costs of risks […]
|
["IT Services"]
|
["most engaged countries in data science"]
|
Mohan Jayaraman
|
2015-07-31T15:00:11
|
2015
| 1,035
|
["big data", "Go", "API", "AI", "data-driven", "most engaged countries in data science", "RAG", "Aim", "analytics", "GAN", "R"]
|
["AI", "analytics", "Aim", "RAG", "R", "Go", "API", "big data", "GAN", "data-driven"]
|
https://analyticsindiamag.com/it-services/has-the-insurance-industry-mastered-the-art-of-utilising-big-data/
| 2
| 10
| 3
| false
| true
| false
|
10,077,050
|
How ITC is Using Machine Learning
|
Today, ITC is one of the largest FMCG players in the country, and its products reach every second household in India. The company is home to more than 60 brands across FMCG categories and reaches about 2 million retail outlets directly. Every day, its brands compete with 100+ companies in the marketplace. “We have a huge portfolio. From sutta to atta, we make everything,” quipped the head of sales analytics at ITC, Swamy Saran Atul, speaking at Cypher 2022. He explained, at length, how the company uses machine learning and analytics to build personalised recommendations to solve the most complex challenges in sales and distribution, emphasising how models work around the most crucial human in the loop – the salesman. Awakening the inner salesman in everyone, Atul said they have 30,000+ salesmen trying to sell 1,500+ SKUs and reach about 2 million outlets across India. “It is very difficult to remember what products to sell to each outlet, what is the propensity of any outlet, etc,” he added. Going beyond the principles of segmentation, ITC believes in hyper-personalization, treating each outlet as a segment. Enter SOTA AI Models “This is a classic recommendation problem,” said Atul, saying that they do all the analytics and figure out what products to sell to each outlet or store. To begin with, for each outlet, they create an outlet profile of the data. This is done using both external and internal data. The internal data include outlet profile, outlet transaction history, product characteristics, consumer demand sensors, etc. On the other hand, the external data include neighbourhood outlets, nearby points of interest, footfalls, demographics, etc. “Thousands of variables go into every outlet profiling,” Atul pointed out. Another interesting question is, where is ITC procuring its external data from? “Most FMCG companies will say that this is our IP. Likewise, it’s our salesman who has been going to these small outlets for years and years. That is how we enrich our understanding of outlets,” he added. However, he said that data, like demographics, rely on publicly procured data. “So, there are multiple data vendors who provide that kind of information,” he added. (Source: Analytics India Magazine, Cypher 2022) Further, Atul pointed out the technical slide (as shown above) and said they use state-of-the-art MLOps and DataOps, a completely automated process. “All of this is done, and we get some 85 per cent accuracy achieved – very happy,” he added wittily, “Nothing is manual. It is perfect, with no errors. Beautiful!” There is no such thing as a perfect ML implementation “We experiment a lot, make mistakes, and improve,” said Atul, explaining the aftermath of what happens after the model has been deployed. He said that there is a lot of human thinking, human intelligence, inputs, feedback, and improvements that go into every model and that humans cannot be ruled out. He calls this human-in-the-loop – a salesman. “Whatever industry salesmen are in, they are motivated by their targets,” quipped Atul. Further describing the typical Indian salesman that works for FMCG companies, he said that the salesmen may not be highly qualified; they earn somewhere around INR 20,000 a month, with a significant variable component. That variable is largely influenced by the targets set by the FMCG company. “Now, if we give a bad recommendation… the salesman will not be able to convert that recommendation. If he does not convert, he will keep losing his income. 
So for him, every recommendation matters,” he added, comparing the recommendation systems in parallel industries, including Amazon, Flipkart, Netflix, etc. “For instance, if the recommendation fails, a Netflix or Amazon doesn't lose much, but if it goes wrong for an FMCG salesman, he may lose out on INR 500-1000 of his variable income, which is huge for him,” said Atul. He said their recommendations are very different from those of food aggregators and grocery delivery players like Swiggy, Zomato, or BigBasket, because those players only have to move from point A to point B, whereas ITC salesmen have to do multiple things. This includes not only going to the retailer but also taking orders for new products, collecting money, and ensuring ITC products are displayed nicely, alongside maintaining a good relationship with the retailers. “That's the life we are trying to simplify,” said Atul.

It now works flawlessly

Atul said that they identified multiple problems over time, including poorly labelled data, incomprehensible recommendations, and others. To solve these challenges, ITC has been investing in increasing the explainability of the model, simplifying the UX while continuously training users, and streamlining planning activities through to execution, ensuring that all the stakeholders benefit from the recommendations across the life cycle. The stakeholders include not only salesmen but also marketers, manufacturers, etc. He said that, thanks to ‘the human in the loop’, they have built a very good model. Interestingly, the recommendation model was built in one month, and the data pipeline was done in two months. “So, within two months, everything was set up,” he recalled, adding that since then they have been scaling it up across the country, improving iteratively. The models look very different from what they looked like two years ago. “The process looks extremely different from what it used to look like two years ago, but that is where the money is. That is where the fun of ML implementation is,” said Atul. Finally, he said that the model is only as good as the data it consumes, the recommendation is only as good as the delivery, and the process is only as good as the discipline we instill.
|
“The model is only as good as the data it consumes, the recommendation is only as good as the delivery, and the process is only as good as the discipline we instill,” says ITC head of sales analytics.
|
["AI Features"]
|
["Machine Learning Latest", "recommendation systems"]
|
Amit Naik
|
2022-10-12T10:00:00
|
2022
| 923
|
["Go", "machine learning", "AI", "ML", "MLOps", "Machine Learning Latest", "recommendation systems", "data pipeline", "ViT", "analytics", "R"]
|
["AI", "machine learning", "ML", "analytics", "MLOps", "recommendation systems", "R", "Go", "data pipeline", "ViT"]
|
https://analyticsindiamag.com/ai-features/how-itc-is-using-machine-learning/
| 3
| 10
| 2
| false
| false
| false
|
10,008,201
|
Complete Tutorial On Twint: Twitter Scraping Without Twitter’s API
|
Web scraping allows us to download data from different websites on the internet to our local system. It is data mining from different online portals using the Hypertext Transfer Protocol, and the extracted data can then be used according to our requirements. Many companies use this for data harvesting and for creating search engine bots. Python has a large variety of packages/modules that can help in the process of web scraping, like Beautiful Soup and Selenium, and there are libraries such as AutoScraper that can automate the process. All these libraries use different APIs through which we can scrape data and store it in a data frame on our local machine.

Twint is an open-source Python library used for Twitter scraping, i.e. we can use twint to extract data from Twitter without using the Twitter API. Certain features make twint more usable and unique compared to other Twitter scraping APIs, namely:

The Twitter API has a limit of fetching only the last 3,200 tweets, while twint has no such limit and can download almost all tweets.
It is easy to use and very fast.
No initial sign-in or sign-up is required for fetching data.

Twint can be used to scrape tweets using different parameters like hashtags, usernames, topics, etc. It can even extract information like phone numbers and email IDs from the tweets. In this article, we will explore twint and see what different functionalities it offers for scraping data from Twitter.

Implementation: We will start by installing twint using pip install twint.

Importing required libraries

We will be scraping data from Twitter using twint, so we will import twint. Other than this, we need to import nest_asyncio, which handles the notebook and runtime errors; we also initiate it in this step.

import twint
import nest_asyncio

nest_asyncio.apply()

Configuring Twint

Before we scrape data from Twitter, we need to configure the twint object and call it whenever required.

t = twint.Config()

Now let us start scraping different types of data from Twitter.

Scraping Data

Followers on Twitter

Here, we will see how we can download the names of the followers of a particular user by using their username. Here I am using my own Twitter username.

t.Username = "Himansh70809561"
twint.run.Followers(t)

The output shows a list of my followers on Twitter because I used my own username; similarly, you can use the usernames of different users and download their followers' names.

Storing info in a DataFrame

We can also store the information in a data frame. Let us see how to store the followers' details in a data frame.

t.Limit = 30
t.Username = 'Analyticsindiam'
t.Pandas = True
twint.run.Followers(t)
follow_df = twint.storage.panda.User_df

Here we see that the top 30 followers are stored in a data frame. We can set the number of followers to any desired number.

Extracting tweets with a particular word

Here we will extract all tweets that contain a particular word that we define.

t.Search = "analytics"
t.Store_object = True
t.Limit = 10
twint.run.Search(t)
tlist = t.search_tweet_list
print(tlist)

The output contains tweets from different users, with their usernames and the date on which each tweet was published.

Tweets of a particular user

We can also extract tweets from different users by entering their username as the parameter.
t.Search = "from:@Analyticsindiam" t.Store_object = True t.Limit = 10 twint.run.Search(t) tlist = t.search_tweet_list Here we can see some recent tweets from Analytics India Magazine along with their username and date on which they were published. These are some of the ways with which we can extract data or scrape data from twitter using twint. Twint contributors are actively contributing to making it better and better day by day. Conclusion: In this article, we saw how we can use twint to extract data from twitter. We started with scraping the followers a person has on twitter further we saw how we can store them in a data frame. We also saw how to extract tweets with a particular string or tweets from a particular user. Twint is easy to easy and is blazingly fast with frequent updates.
|
Twint is an open-source python library that is used for twitter scraping i.e we can use twint in order to extract data from twitter and that too without using the twitter API.
|
["Deep Tech"]
|
["Data Mining", "data mining tools", "extract data from twitter", "Twitter (X)", "Web Scraping", "Web Scraping Tools", "Web Scraping With Python"]
|
Himanshu Sharma
|
2020-09-23T18:00:00
|
2020
| 692
|
["API", "Web Scraping Tools", "TPU", "programming_languages:R", "AI", "Web Scraping", "Data Mining", "R", "RAG", "Python", "extract data from twitter", "data mining tools", "analytics", "programming_languages:Python", "Twitter (X)", "Web Scraping With Python", "Pandas"]
|
["AI", "analytics", "Pandas", "RAG", "TPU", "Python", "R", "API", "programming_languages:Python", "programming_languages:R"]
|
https://analyticsindiamag.com/deep-tech/complete-tutorial-on-twint-twitter-scraping-without-twitters-api/
| 2
| 10
| 0
| true
| true
| false
|
15,885
|
Hadoop vs HPCC – How these big data giants stack up against each other?
|
Hadoop vs HPCC Systems

Hadoop is the technology most commonly associated with the term “Big Data”. The underlying technology makes massive amounts of data accessible and is based on the open source Apache Hadoop project. However, there is a competitor to Hadoop called High Performance Computing Cluster (HPCC), and the technology is arguably more mature and enterprise-ready. LexisNexis developed HPCC Systems as an open source initiative, and the platform has helped LexisNexis power its $1.5 billion data-as-a-service (DaaS) business. HPCC Systems is open-sourced under the Apache 2.0 license, like its Hadoop counterpart. Moreover, both make use of commodity hardware and local storage interconnected through IP networks, which allows parallel data processing and/or querying across the architectures. However, this is where the similarities end.

Are HPCC Systems more efficient than Hadoop?

High Performance Computing Cluster

HPCC Systems has been in production use for over 12 years now, although the open source version has only been available for the last couple of years. It offers a more mature, enterprise-ready package. HPCC Systems uses a higher-level programming language called Enterprise Control Language (ECL), which is compiled into C++, as opposed to Hadoop's Java-based approach, where Java relies on a Java virtual machine (JVM) to execute.

Key advantages of HPCC Systems: Ease of use. Backup and recovery in production. Enhanced speed, as C++ runs natively on top of the operating system. More mission-critical functionality.

Moreover, HPCC Systems proves more advantageous as it has layers of security, recovery, audit and compliance, which Hadoop lacks. With HPCC Systems, if data is lost during a search, it is not gone forever; in fact, it can be easily recovered, as with traditional data warehouses. So far, no reliable backup solution exists for a Hadoop cluster. Hadoop stores three copies of the data, which is not the same as having a backup, and it does not provide archiving or point-in-time recovery. However, Hadoop has not really been designed to be used in a production environment. It serves its best purpose in analyzing massive amounts of data to find correlations between hard-to-connect data points. The best use case for Hadoop at the moment is to serve as a large-scale staging area: a platform for adding structure to large volumes of multi-structured data, which facilitates analysis of the data by relational-style database technology.

How beneficial is it to integrate an Enterprise Control Language?

ECL is very similar to high-level query languages such as SQL. Users can tell the computer what they want rather than instructing it how to do it, implying that ECL is declarative in nature, somewhat like SQL. To put it in perspective, a Microsoft Excel expert should generally have no major trouble picking up ECL. HPCC Systems has worked with analytics provider Pentaho and its open source Kettle project to simplify how queries are developed, allowing users to create ECL queries in a drag-and-drop interface. This feature is not available with Hadoop's Pig or Hive query languages, which, besides being primitive, are also hard to program and maintain, and extending and reusing the code is a difficult task. Moreover, HPCC Systems is designed to answer real-world questions.
Hadoop, on the other hand, needs users to put together separate queries for each variable they seek, which makes the process more complex than in the case of HPCC Systems.

How does Hadoop weigh in against HPCC Systems?

The inventor of Hadoop, Doug Cutting

Hadoop was originally part of the Nutch project, an open source web search effort inspired by Google's papers on distributed storage and processing, and it became a separate Apache project in 2006. Since then, Hadoop has become the de facto standard for big data projects and has a user base much larger than that of HPCC Systems. That is not all: Hadoop is supported by an open source community in the millions and an entire ecosystem of start-ups. In other words, Hadoop reflects the capability to cater to a wider range of end users than the data management systems that came before it. Scalability, flexibility and cost-effectiveness are the three key advantages of leveraging Hadoop, and its cost-effectiveness is what truly drives its popularity among users. With HPCC Systems, much of the required functionality is available out of the box, whereas with Hadoop, which runs on commodity hardware, someone in-house or a third-party provider has to be hired to put everything together. Cloudera serves as the best-known and most successful example of a Hadoop startup; the organization furnishes turnkey Hadoop implementations to companies as diverse as eBay, Chevron and Nokia.

Last Words

The rapid explosion of data is what is fueling this transformation. Data is growing at a tremendous scale and speed as more and more things get hooked up to computers, whether it is your house, your TV, your cell phone, or the flight you took. This demands a different architecture and a different way of working with the data. Thus, HPCC Systems might be the need of the hour if enterprises are looking for a robust solution that provides enterprise-grade functionality. However, Hadoop may be the better alternative if enterprises just intend to get a feel for what big data is all about. Moreover, Hadoop provides a huge open-source ecosystem of developers working on it daily, besides a host of third-party organizations that want to make the most of the opportunity that big data presents.
|
Hadoop is more commonly associated with the term “Big Data.” The underlying technology makes massive amounts of data accessible and is based on the open source Apache Hadoop project. However, there is a competitor to Hadoop, called High Performance Computing Cluster (HPCC). The technique is more mature and enterprise-ready. LexisNexis developed HPCC Systems as an […]
|
["Deep Tech"]
|
["big data india", "DaaS", "developers India", "full stack project ideas", "hadoop world", "hpc data management system", "security India"]
|
Amit Paul Chowdhury
|
2017-06-27T05:37:01
|
2017
| 888
|
["Go", "security India", "big data india", "hadoop world", "hpc data management system", "AI", "developers India", "Scala", "RAG", "Aim", "C++", "analytics", "SQL", "DaaS", "R", "Java", "full stack project ideas"]
|
["AI", "analytics", "Aim", "RAG", "R", "SQL", "Go", "Java", "Scala", "C++"]
|
https://analyticsindiamag.com/deep-tech/hadoop-vs-hpcc-how-these-big-data-giants-stack-up-against-each-other/
| 3
| 10
| 4
| true
| true
| false
|
26,506
|
Why Are The US And China Rushing To Standardise AI?
|
The blatant antagonism between the US, China and Canada has drawn criticism from several think tanks, which have prophesied another “tech cold war”. In a report by the Real Clear Defence portal, known for its analysis of national security and strategy, the artificial intelligence arms race is said to have had a polarising effect, with countries firming up their military strategies. According to the defence portal, the latest US National Security and Defence Strategies have a new focus on great power rivalry, characterising China as a strategic competitor. The latest news reports also confirm that the Pentagon has updated its cyber strategy, which will be announced soon, and an AI plan is in the pipeline to strengthen the country's national security. Reportedly, the US military is also funnelling additional resources into technology. The US's attempts are aimed at countering China's hegemony in the Indo-Pacific region, the reports hint.

As countries rush to outrival each other for AI dominance, the race has sparked multifaceted challenges in terms of ethical policies, AI standards and the power to shape the world according to each country's interests. Meanwhile, China leads the world in terms of investment and R&D. A McKinsey report indicates the country has developed strategic awareness among tech leaders and startup communities. The Chinese school curriculum has been revamped to develop technical know-how among students, and the government has integrated AI tools into the public sector to enhance human welfare by improving healthcare, the environment, security and education. By prioritising its AI strategy and building a robust data ecosystem, the country has spurred the adoption of AI in traditional industries. It has also established a standard for privacy for Chinese citizens and the business community, put in place the building blocks for success such as a well-trained data science talent pool, and given the business community the potential to generate more data.

Does The World Need AI Standardisation?

In the current political climate, when AI rivalries are heating up, countries are rushing to put AI standards in place. Researchers at the New America think tank analysed how the Chinese moved to assert a role in writing AI standards. The analysis by Jeffrey Ding, Paul Triolo and Samm Sacks listed the top three reasons behind China's attempt at laying down AI standards and what it would mean for the rest of the world. The think tank's analysis notes that the Chinese government views standards as playing a significant role in the country's aspirations for AI leadership. This is the country's attempt to generate more value out of AI technologies by allowing interoperability of systems; it will also strengthen Chinese tech companies' commercial competitiveness on a global scale. On the back of massive investments and an increased focus on R&D lies China's urge to blunt US dominance and establish itself as a global tech player. The think tank notes that this is one of the AIDP's near-term goals, emphasising how “China should have established initial AI technology standards, service systems, and industrial ecological system chains. It should have cultivated a number of the world's leading AI backbone enterprises.” In 2018, the Chinese government took two important actions to beef up domestic and international AI-related standards: publishing a white paper on the topic in January, and hosting an important international meeting in Beijing in April.
The groups wanted to develop standards for AI terminology, reference architecture, algorithm models, computational methods, security, trustworthiness, use cases and application analysis. Another important event was the US-China AI Tech Summit held in Silicon Valley in July, where the two AI powers sought to develop standards for autonomous technology and medical AI. According to a report in China Daily, the easiest first targets for standards development are autonomous vehicles and medical AI applications, because they are directly related to people's daily lives. The Institute of Electrical and Electronics Engineers (IEEE) announced new standards for ethics in AI safety last year. China responded by setting up the China AI Industry Alliance (AIIA), established last year, which is striving hard to catch up with the IEEE in addressing social problems. Meanwhile, the US on its part has stepped up efforts to square off against Russia. Defence reports claim that US Defence Secretary James Mattis has repeatedly emphasised the positive impact AI can have on defence operations, and AI is seen as one of the avenues for major strategic modernisation supporting the National Defence Strategy.
|
The blatant antagonism between the US, China and Canada has drawn criticism from several think tanks who have prophesied another “tech cold war”. In a report by Real Clear Defence portal, known for its analysis of national security and strategy, the artificial intelligence arms race has had a polarising effect with countries firming up their […]
|
["AI Features"]
|
["AI in china"]
|
Richa Bhatia
|
2018-07-18T10:58:06
|
2018
| 736
|
["data science", "Go", "artificial intelligence", "AI", "medical AI", "Aim", "AI safety", "Rust", "AI in china", "R", "startup"]
|
["AI", "artificial intelligence", "data science", "Aim", "medical AI", "R", "Go", "Rust", "AI safety", "startup"]
|
https://analyticsindiamag.com/ai-features/why-are-the-us-and-china-rushing-to-standardise-ai/
| 3
| 10
| 5
| false
| false
| true
|
10,049,804
|
How Deep Learning is Changing Corporate Finance Around the World
|
Predicting credit ratings for global corporations is a difficult task, and Massaron employs deep learning techniques to forecast them. Luca Massaron, Senior Data Scientist, Kaggle Master and Google Developer Expert in ML, spoke at the Deep Learning DevCon 2021, organised by The Association of Data Scientists, earlier this month. In his talk on “Deep Learning for Credit Rating”, Massaron covered how deep learning techniques may be used to forecast credit ratings for worldwide business organisations, and how they compare with the most widely used traditional machine learning approaches, such as linear models and tree-based classifiers.

(Source: Luca Massaron | DLDC 2021)

According to Massaron, in an article titled “An artificial intelligence approach to shadow rating”, the study's objective was to demonstrate that neural networks may be a more effective technique for calibrating and predicting ratings than other modelling approaches currently used in the banking industry.

Risk factors in Credit Ratings

Luca proceeded by explaining the significance of credit ratings. International rating agencies like Standard & Poor's (S&P), Moody's and Fitch give credit ratings, which are alphanumeric indications of credit risk, and different ratings correspond to different expected default probabilities. He continued by stating that while rating companies claim to employ both quantitative and qualitative data in determining a rating, they do not disclose the methodology used for assigning it. He further explained the risk factors involved in credit ratings. The discussion then moved on to the data and the overall workflow, covering the “balance sheet index” and “balance sheet ratios” in detail.

(Source: Luca Massaron | DLDC 2021)

Then followed the macroeconomic factors, including country-specific metrics and Eurozone indicators. Finally, the category embedding model of an artificial neural network was discussed. Massaron continued by discussing the drivers and workflow. They used a sample of 2,469 annual corporate credit rating observations to train the model and evaluate its performance. Because their analysis focused on corporate debt, they omitted financial institutions and sovereign debt and examined various sectors. The proposed model architecture comprises a deep neural network with multiple layers of densely connected artificial neurons. A technique similar to word embedding, which involves modelling words and documents, was used for the categorical features.

Prediction using SHAP

In addition, while discussing the results, he mentioned the kappa coefficient, a chance-adjusted index of agreement between the algorithm and the ground truth. He went on to say that the SHAP (SHapley Additive exPlanations) approach is utilised to attribute each prediction to the contributions of the individual features. Later, when discussing possible improvements, the speaker mentioned the difficulty of fitting a complex non-linear model on a few cases that do not span the entire state space. While discussing regularisation, he explained deep double descent, which occurs in CNNs, ResNets and transformers. Similarly, while considering data augmentation, he stated that one method to overcome the problem of limited data is to attempt to expand it. Mixup is an augmentation that appears to focus on dissimilarity rather than similarity. Additionally, he stated that learning from a few cases results in an irregular and discontinuous decision boundary in a neural network.
The decision boundary becomes smoother as a result of the mixup augmentation. The results of this work also demonstrated adequate accuracy over the different rating classes when applying categorical embeddings to artificial neural network (ANN) architectures.
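As a brief illustration of the mixup augmentation described above, the following NumPy sketch blends pairs of samples and their one-hot labels; the alpha value and toy shapes are illustrative and are not taken from the talk.

import numpy as np

rng = np.random.default_rng(0)

def mixup(X, Y, alpha=0.2):
    """Blend each sample (and its one-hot label) with a randomly chosen partner."""
    lam = rng.beta(alpha, alpha, size=(len(X), 1))   # one mixing weight per row
    idx = rng.permutation(len(X))                    # random partner for each row
    return lam * X + (1 - lam) * X[idx], lam * Y + (1 - lam) * Y[idx]

# Toy usage: 6 samples with 4 features and 3 one-hot rating classes.
X = rng.normal(size=(6, 4))
Y = np.eye(3)[rng.integers(0, 3, size=6)]
X_mix, Y_mix = mixup(X, Y)
print(X_mix.shape, Y_mix.shape)   # (6, 4) (6, 3)

Training on such blended examples, rather than only on the original few cases, is what helps smooth the decision boundary mentioned in the talk.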
|
According to Luca Massaron, deep learning can be employed to anticipate the credit ratings of global business organisations.
|
["AI Features"]
|
["AI (Artificial Intelligence)", "artificial neurons", "CNNs", "deep double descent", "Deep Learning", "deep neural network", "Transformers"]
|
Dr. Nivash Jeevanandam
|
2021-09-27T16:00:00
|
2021
| 550
|
["Go", "artificial neurons", "artificial intelligence", "AI", "neural network", "deep neural network", "ML", "Transformers", "deep double descent", "Aim", "deep learning", "GAN", "Deep Learning", "CNNs", "R", "AI (Artificial Intelligence)"]
|
["AI", "artificial intelligence", "ML", "deep learning", "neural network", "Aim", "Transformers", "R", "Go", "GAN"]
|
https://analyticsindiamag.com/ai-features/deep-learning/
| 3
| 10
| 2
| false
| false
| true
|
10,118,894
|
Cognizant Teams Up with Microsoft to Integrate Generative AI for Employees
|
Judson Althoff, EVP & Chief Commercial Officer of Microsoft, has announced a partnership with Cognizant to bring Microsoft's generative AI capabilities to Cognizant's employees and millions of users across industries. Althoff said that, by leveraging Microsoft's Copilot capabilities, Cognizant will help organisations transform business operations, enhance employee experiences, and deliver new value for their customers. “We will work with Cognizant to build and deliver industry and business-specific solutions built on Microsoft Copilot Studio to help customers create and customise their own copilots. I am excited about this next step in our journey and for the opportunities ahead to drive pragmatic innovation together,” said Althoff. Cognizant acquired 25,000 Microsoft 365 Copilot seats for its associates, along with 500 Sales Copilot seats and 500 Services Copilot seats, aiming to boost productivity, streamline workflows, and improve customer experiences. Cognizant also plans to roll out Microsoft 365 Copilot to a million users among its Global 2000 clients and across 11 industries. Furthermore, 35,000 Cognizant developers have been trained on GitHub Copilot through its Synapse skilling program, with another 40,000 developers set to undergo training. A few months ago, Cognizant and Microsoft teamed up to introduce the Innovation Assistant, a generative AI-powered tool developed on the Microsoft Azure OpenAI Service. Cognizant CEO Ravi Kumar highlighted the partnership with Microsoft, stating, “In collaboration with Microsoft, we are leveraging generative AI to transform our innovation strategy, aiming to keep ourselves and our clients ahead in a swiftly changing business landscape.” In 2023, Microsoft began rolling out Copilot to consumers in September and to enterprises in November, heralding the AI-powered assistant's potential to revolutionize user interactions with technology. Subsequently, reports surfaced in November indicating that Microsoft 365 Copilot could potentially generate over $10 billion in annualized revenue for the tech giant by 2026.
|
Cognizant acquired 25,000 Microsoft 365 Copilot seats for its associates and 500 Sales Copilot seats and 500 Services Copilot seats.
|
["AI News"]
|
["Cognizant", "Generative AI", "Microsoft"]
|
Gopika Raj
|
2024-04-23T15:46:57
|
2024
| 298
|
["copilots", "Go", "OpenAI", "AI", "R", "ML", "Cognizant", "RAG", "Aim", "generative AI", "Generative AI", "Azure", "Microsoft"]
|
["AI", "ML", "generative AI", "OpenAI", "Aim", "RAG", "copilots", "Azure", "R", "Go"]
|
https://analyticsindiamag.com/ai-news-updates/cognizant-teams-up-with-microsoft-to-integrate-generative-ai-for-employees/
| 3
| 10
| 4
| false
| false
| false
|
10,141,361
|
SQL Server Now Natively Integrates into Microsoft Fabric
|
In an exciting announcement, Microsoft revealed that SQL Server is now integrated into Microsoft Fabric databases, combining operational and analytical databases in one unified platform. “This eliminates the need to shuffle data between systems, simplifying enterprise data management,” said Microsoft chief Satya Nadella. The adoption of Microsoft Fabric includes 70% of the Fortune 500, reinforcing its dominance in enterprise data management. “It’s a unified platform for all operational and analytical data,” added Nadella. As mentioned at the event, Microsoft is investing in data innovation across this platform. SQL Server 2025, now in preview, introduces itself as an enterprise AI-ready database designed to operate seamlessly from ground to cloud. This latest version integrates AI capabilities, simplifying AI development and pattern recognition with secure performance and intuitive vector capabilities, all while using the familiar T-SQL language. Source: Microsoft Additionally, building on its reputation for best-in-class security and performance, the new release amplifies automation for threat management and addresses potential vulnerabilities through Microsoft’s advanced management capabilities while fully integrated with Azure, offering upgraded cloud security and agility. Earlier this year at Microsoft Build, the tech giant launched real-time intelligence in its AI-powered analytics platform, Microsoft Fabric. The new feature offers a comprehensive SaaS solution, enabling customers to quickly analyse and act on large-scale, time-sensitive data for improved business decision-making. It integrates synapse real-time analytics and data activator. Traditionally, constructing real-time solutions has been complex and resource intensive. However, according to Microsoft CEO Satya Nadella, this new feature claims to simplify this process by offering a “unified platform” that leverages Microsoft’s Azure streaming and big data technologies. This ensures scalability, reliability, and ease of use, catering to users of all skill levels.
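As a rough illustration of the vector capability described above, the sketch below runs preview-style T-SQL from Python via pyodbc. The VECTOR data type and VECTOR_DISTANCE function follow Microsoft’s public preview documentation at the time of writing and may change before general availability; the connection string, table, column names, and embedding values are placeholders, so treat this as a hedged sketch rather than an official example.

```python
# Hedged sketch: querying SQL Server 2025 preview vector features via pyodbc.
# The T-SQL vector syntax is preview-era and may change; names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=demo;UID=sa;PWD=<your-password>;TrustServerCertificate=yes"
)
cur = conn.cursor()

# Hypothetical table with a small 3-dimensional embedding column.
cur.execute("""
    CREATE TABLE docs (
        id INT PRIMARY KEY,
        body NVARCHAR(MAX),
        embedding VECTOR(3)   -- preview data type
    )
""")
cur.execute(
    "INSERT INTO docs VALUES (1, N'hello fabric', CAST('[0.10, 0.20, 0.30]' AS VECTOR(3)))"
)
conn.commit()

# Rank rows by cosine distance to a query embedding (preview function).
cur.execute("""
    SELECT TOP (5) id, body,
           VECTOR_DISTANCE('cosine', embedding,
                           CAST('[0.10, 0.20, 0.25]' AS VECTOR(3))) AS dist
    FROM docs
    ORDER BY dist
""")
for row in cur.fetchall():
    print(row.id, row.body, row.dist)
```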
|
The adoption of Microsoft Fabric includes 70% of the Fortune 500, reinforcing its dominance in enterprise data management.
|
["AI News"]
|
["AI (Artificial Intelligence)", "Microsoft", "SQL"]
|
Tarunya S
|
2024-11-21T14:32:39
|
2024
| 278
|
["big data", "AI", "R", "ML", "Scala", "RAG", "Aim", "analytics", "SQL", "AI (Artificial Intelligence)", "Azure", "Microsoft"]
|
["AI", "ML", "analytics", "Aim", "RAG", "Azure", "R", "SQL", "Scala", "big data"]
|
https://analyticsindiamag.com/ai-news-updates/sql-server-now-natively-integrates-into-microsoft-fabric/
| 2
| 10
| 2
| true
| false
| false
|
10,040,060
|
Robotics As A Service Will Be The New Trend: Sangeet Kumar, Addverb Technologies
|
The automation market is projected to reach a valuation of $253 billion by 2026, growing at a CAGR of 8% during the forecast period (2021-2026), as per the Global Industrial Automation Market Outlook report. Analytics India Magazine got in touch with Sangeet Kumar, Co-founder & CEO, Addverb Technologies, to understand the ins and outs of intelligent automation and how the company is supporting its customers in implementing automated workflows. “Our products are enabled with advanced technologies such as AI, ML and deep learning, thus giving us an edge in comparison to the existing products and players in the market,” said Kumar. Excerpts: AIM: What is industrial automation? Why is it important? Sangeet Kumar: Industrial automation uses intelligent machines in operations so that the processes can be carried out with minimal human intervention. It can be achieved through several means, including mechanical, electronic, robotics, AI, ML, deep learning for leaner operation processes that require less energy, less material, and reduced labour waste. In the current era, technological advances have overcome many of the traditional limitations of robotics and automation. A new generation of flexible and versatile robots cost far less than those used in manufacturing environments today. It can be trained by frontline staff to perform tasks previously thought to be too difficult for machines— picking and packing irregularly spaced objects, resolving wiring conflicts in large-scale projects can be taken care of with the help of industrial automation. Manual work is getting replaced by smart robots. Demand for precise production without compromising on quality, increasing need for digital transformation across sectors – healthcare, transportation, retail and favourable government policies in the manufacturing sector are driving the industrial automation market. As the potential of IoT and interconnectivity is realised, the industry is expected to grow at a fast rate in the future. AIM: What are the current trends in automation? Sangeet Kumar: The automation trends that will disrupt the Industrial automation sector in 2021 include the adoption of autonomous mobile robots in the manufacturing and warehousing sector. The industry will migrate to more advanced navigation technologies such as LIDAR, RADAR & cameras. Secondly, the growth of e-commerce has catalysed the use of mobile robots in the warehouse and instigated R&D for continuous improvement. To optimise the value chain, companies are investing more in their core business and outsource the rest. Hence RaaS (Robotics as a Service) will be the new trend. Moreover, cobots with easy configuration, app-based controls and built-in safety mechanism, including power and force limiting technologies, make them safe to collaborate with human operators. Competitive pressures and onslaught of technology such as cloud, augmented reality, etc. will prompt manufacturers to look at industrial IoT solutions. Areas like remote diagnostics, predictive maintenance, fire hydrant management shall be the first areas for disruption. AIM: What is the role of AI in robotics? Sangeet Kumar: Artificial Intelligence gives robots computer vision to navigate, sense and calculate their reaction accordingly. AI-enabled robots are trained to handle repetitive tasks at inventories, logistics and supply chains, thereby reducing human work. From medical supplies to sanitisation, disinfection, and performing remote surgeries, AI makes machines more intelligent. 
Moreover, robotics for cargo handling speeds up the operations and performance efficiency, including baggage handling, ATRS, trolley tracking and disinfection. Similarly, AI-enabled logistics processes deliver multiple benefits to growth in minimal human intervention, combined with savings in labour cost, improvement of accuracy, and cumulative savings in energy consumption. AIM: How does Addverb leverage technologies such as AI, ML? Sangeet Kumar: We are one of the rare Indian startups which is into both hardware and software. A large chunk of 400 engineers is working in the R&D, driving our innovation through various products and building solutions that seamlessly integrate with any software or hardware in factories and warehouses. Our state-of-the-art manufacturing is the facility where “robots make robots”. With innovation being at the core of our DNA, we spend 10% of our revenue on R&D to create an extensive, affordable product portfolio for SMBs through affordable and sustainable technologies. We have recently developed our own AI engine for text-to-speech conversion, which powers our product Khushi, a voice-based order picking system for warehouses. We have also launched Veloce, a hybrid product that adds the reach of a carton shuttle and the flexibility of a mobile robot, thus proving to be the most flexible product in the warehouse automation segment. We are also working extensively on low-cost vision picking solutions and have launched multiple variants of Dynamo (500 Kg & 1 Ton) with tugging applications. AIM: Tell us about your flagship products and solutions Sangeet Kumar: We have successfully devised an array of products and technologies for industry 4.0 with solutions. In our robotics category, we provide in-house developed AMRs (autonomous mobile robot) capable of carrying loads of up to 1,500 kg in a controlled environment. We also have a UV disinfectant mobile robot, pallet shuttle robot – ideal for high inventory turnover operations and even the carton shuttle robots for movement of storage and retrieval of carton loads. Additionally, Pick by Voice, a voice-directed picking solution powered by Addverb’s NLP-based engine, offers paperless hands-free order picking and fulfilment solutions. Similarly, Pick by Vision, equipped with augmented reality, offers hands-free operation. Our Smart Conveyors help in swift material movement and have predictive maintenance capabilities as well. Our WCS Software ensures real-time tracking and tracing of the material flow inside the warehouse by interacting with all the automation equipment and optimises the material handling operation through dynamic load balancing. Our WMS software, armed with intelligent IoT, ensures effective inventory management and provides complete visibility into the end to end operations of the warehouse. Recently, we have developed a novel solution to cater to the exponential demand of e-commerce sector i.e. Micro Fulfilment Solution. These small-scale warehouse facilities located inside the cities at strategic locations enable a less than two-hour delivery from when an order is placed until it gets delivered. AIM: What are the potential applications of industrial automation? Sangeet Kumar: Manufacturing companies use technology to assemble or create products, monitor maintenance tasks, or manage inventory levels using AMRs, ASRS for pallet and carton storage and semi-automated picking technologies. 
Robotics will expand into the food and beverage industry, where they will perform tasks such as packaging, palletising, and filling. Similarly, the automotive industry, with its need for mass customisation of electronic goods and the re-standardisation of the semiconductor industry, is leveraging the power of robotics. New forms of progress in software, hardware, and materials development, coupled with advances in necessary infrastructural support systems, enable uniquely new and diverse Robotics and Autonomous Systems (RAS) applications in spaces like hospitals. Currently, the movement toward self-operating vehicles, both used at a commercial and personal level, will incorporate industrial automation developments. Once products are made and are ready for shipment, the distribution industry takes over. Expectations for faster delivery continue to accelerate in all areas. AIM: How can next-gen cobots unlock the power of automation for different sectors? Sangeet Kumar: Manufacturing industries such as electronics, heavy machinery, even furniture, toys, and clothing benefit from the precision and speed of smart and automated payload arms. Traditional manufacturers that handle metals, plastics, and electronics can streamline their assembly lines and get work done faster without compromising product quality. Mass-produced eatery products and wrapped food can regain a personal touch with cobot integration. Robotic arms flipping burgers, frying fries, and whipping up concoctions in a coffee shop or bar could eventually become a common sight. Additionally, pharmaceutical companies can achieve higher efficiency and lower error rates while maintaining workplace sterility in areas like research and testing and marking and packing with cobot integration. The smart features that make cobots safe around humans prove useful in warehouses, especially with regards to e-commerce distribution and fulfilment. In an interesting twist, robots are now helping today’s students learn robotics and programming faster than ever. A less-known use of cobot nowadays is in the entertainment industry, where they are used in filming to carry cameras that are too heavy for humans to handle. They are also great for situations where filming spaces are too tight for a traditional crane. AIM: Tell us about Addverb Technologies’ roadmap Sangeet Kumar: We have launched a new robot called Veloce. This product provides immense flexibility by combining the vertical reach provided by a shuttle system with the natural navigation of a mobile robot. This product will change the warehousing paradigm by increasing the picking efficiency in warehouses by 3-4 times. Also, the deployment time will crash significantly as compared to traditional automation systems. Additionally, we are looking forward to expanding globally, especially South-East Asia, the US, Europe, and Australia. We have established offices in Singapore, Australia & Netherlands and we are keen to penetrate these markets. We are expanding our presence in these geographies. We are opening an office in the US in the next three months. In the domestic market, we already have a powerful presence in FMCG, organised retail, e-commerce, grocery, beverage and tyres. For this quarter, we are keen on pharma, electronics, automobile, airports and hospitals. Many airports are coming up in India, and our mobile robots can help create highly automated, reliable, and flexible baggage handling systems. 
We already have an employee base of 400+, and we look to expand to around 600-700 by the end of the current financial year. We are also looking to expand and augment our current manufacturing capacity shortly.
|
The automation market is projected to reach a valuation of $253 billion by 2026, growing at a CAGR of 8% during the forecast period (2021-2026), as per the Global Industrial Automation Market Outlook report. Analytics India Magazine got in touch with Sangeet Kumar, Co-founder & CEO, Addverb Technologies, to understand the ins and outs of […]
|
["AI Features"]
|
[]
|
kumar Gandharv
|
2021-05-14T10:00:00
|
2021
| 1,568
|
["artificial intelligence", "AI", "ML", "computer vision", "RAG", "NLP", "Ray", "Aim", "deep learning", "analytics"]
|
["AI", "artificial intelligence", "ML", "deep learning", "NLP", "computer vision", "analytics", "Aim", "Ray", "RAG"]
|
https://analyticsindiamag.com/ai-features/robotics-as-a-service-will-be-the-new-trend-sangeet-kumar-addverb-technologies/
| 3
| 10
| 6
| false
| false
| true
|
10,013,913
|
GPAI’s First Summit: Top Quotes From The Event
|
The Global Partnership on Artificial Intelligence (GPAI) recently concluded its first summit on the 4th of December. GPAI, formed as an international and multi-stakeholder initiative, was established in 2020 with an aim to promote the responsible development and use of AI. Initiated by France and Canada, the group now includes multiple countries across the world, including India. The first summit, held in Canada, had several sessions, including reports from the five working groups, which presented the work they had done over the past couple of months and announced their long-term vision. Apart from that, the President of France, Emmanuel Macron, the Prime Minister of Canada, Justin Trudeau, and the Minister of Innovation, Science and Industry for Canada, Navdeep Bains, also spoke at the event. Here, we are jotting down some interesting statements made at GPAI’s first summit in 2020. Justin Trudeau, Prime Minister of Canada “We have the technology to fight diseases, to address climate change, and to better deliver humanitarian aid. In other words, we have the technology to shape the world for the better. Let’s not forget though; positive change just doesn’t happen by itself, we have to choose it.” “If innovation has the ability to solve problems when used right, it also has the potential to create new challenges when left unchecked.” “What we see now with the world coming together on AI is demonstrating a global community respond to some of the most pressing issues of our time even as we are responding to the greatest challenge of our time around COVID.” “Citizens around the world are being increasingly impacted by AI and the desire of like-minded companies, countries and scientists to come together to figure out the rules that are going to keep Canadians and all citizens protected is really exciting.” Emmanuel Macron, President of France “Mastery of artificial intelligence requires progress on two fronts: innovation and assurance. They are inseparable and cannot function without each other.” “Technology has been a powerful driving force of human progress, and I sincerely wish that AI would enable us to progress in all the domains, particularly to overcome the challenges in healthcare and environmental sectors.” “Every day, we see clear proof of how technological progress can provoke a setback on our fundamental liberties and rights in our everyday life. Without the appropriate safeguards, technology can weaken democracy and threaten the universal values enshrined by the UN.” “With this partnership, we intend to build a digital world that would be fair, transparent, and inclusive that is open to diversity and that must not replicate everything and must not create new biases, new exclusions, and threats on fundamental freedoms and universal values.” Navdeep Bains, Minister of Innovation, Science and Industry, Canada “AI represents one of the most impactful and transformative technologies in the world today. 
The work stemming from this event will be instrumental in shaping the approach to adopting and governing this technology in a responsible manner.” “I firmly believe that in order for these (technological) changes to achieve the maximum global benefit, AI innovation and growth need to be harnessed by the values of human rights, inclusion, and diversity.” “We do all this (bring more algorithmic transparency) because we know that if we don’t support the transformation that AI promises to bring, in a transparent, inclusive, and collaborative way there will be no public trust to support the adoption of beneficial and potentially life-changing applications.” Joanna Shields, Co-chair, GPAI Steering Committee “We must ensure that the AI we are building is magnanimous, not malevolent. That bias is kept in check and that the future serves us all equally.” “The work we are doing at the Global Partnership for AI truly matters. Doing it properly, getting on the front foot, embedding the frameworks, the standards and the principles as we go. And putting humanity at the centre of our thinking.” “We must continue to build partnerships with more experts and governments, and harness the flourishing network of the global AI community to solve society’s biggest challenges like climate change, food insecurity, economic inequality and poor health and education outcomes.” Jordan Zed, GPAI Council President “We need to understand that there has to be, in some ways, a spirit of experimentation with which we are approaching this. It might not be the case that every use case or every project yields the kind of outcome that we think, and that’s okay. I think we need to create that safe space for us collectively to try out how we wish to approach a particular topic.” “I want to be frank and open that we have a lot of work ahead of us. There are key opportunities around ensuring inclusion and diversity in the approaches that we take, getting the details right, ensuring we have the governance to sustain ourselves, and that we are drawing on expertise even beyond GPAI membership.”
|
The Global Partnership on Artificial Intelligence (GPAI) recently concluded its first summit on the 4th of December. GPAI, formed as an international and multi-stakeholder initiative, was established in 2020 with an aim to promote the responsible development and use of AI. Initiated by France and Canada, the group now includes multiple countries across […]
|
["AI Trends"]
|
["Responsible AI"]
|
Kashyap Raibagi
|
2020-12-11T10:00:00
|
2020
| 811
|
["Go", "API", "artificial intelligence", "AI", "ML", "Responsible AI", "Git", "BERT", "Aim", "Rust", "R"]
|
["AI", "artificial intelligence", "ML", "Aim", "R", "Go", "Rust", "Git", "API", "BERT"]
|
https://analyticsindiamag.com/ai-trends/gpais-first-summit-top-quotes-from-the-event/
| 3
| 10
| 1
| true
| false
| false
|
10,011,457
|
The Role Of AI Collaboration In India’s Geopolitics
|
Global power shifts in the post-Cold-War era characteristically moved away from traditional military rivalries to economic expansion and prowess. While physical resources like oil and minerals defined the geopolitics of countries for a long period, the last decade has seen, “data becoming the new oil,” and artificial intelligence (AI) as a geopolitical tool. AI helps predict weather patterns, control diseases, improve agricultural yield and help manage the complexities of the supply chain for food, medicine, and other goods, all of which have profound geopolitical implications, according to a Brookings Institute report. A country’s investment in AI has a resulting impact that is both national and global. In such scenarios, “AI collaborations between countries are no doubt beneficial like they have been with any other technology.” said Arindrajit Basu, research manager at the Centre for Internet and Society, India. “However, it is important to consider whether it is an equitable collaboration between the participants.” AI Collaboration at the Forefront of Strategic Alliances In recent times, AI and technology have played a central role in the formulation of bi-lateral and multilateral strategies and diplomatic dialogues, across countries. The Quad, which is an ‘informal strategic forum’, that features semi-regular summits between the countries India, USA, Japan and Australia, has been seen as an alliance to counter Chinese influence and alleged aggression in Asia-Pacific. Members of the Quad have undertaken various initiatives under the technology and AI front. In October this year, India and Japan finalised an agreement that provides co-operation for artificial intelligence among other strategic areas. The US-based National Security Commission on Artificial Intelligence (NSCAI), in its report, has encouraged collaboration among democracies in the Indo-Pacific region. The Commission advocates for the US to centre its Indo-Pacific relations around India, with emerging tech as a key focal point that will lead to the creation of US-India Strategic Tech Alliance (UISTA). The Global Partnership for AI (GPAI) was launched for countries to jointly work to develop responsible AI. On June 15, India joined as one of the founding members of GPAI, which also includes the rest of the three Quad countries. GPAI excludes China and hence is also seen as a means to counter China’s growing influence in the technology. While these strategic partnerships are important, the question comes to why you are leveraging them, said Basu, whose research focuses on the geopolitics and constitutionality of emerging technologies. “Something India has which China doesn’t is a vibrant democracy and functioning constitution, and that should underpin how we form our AI policy. And to that extent, being a part of a coalition of democracy, whether it is the Quad or the GPAI, if that is the way with which we are approaching the coalition then that is great,” said Basu. “But if we are approaching it as a solely military thing, that enables us to thwart China, then that might be slightly unrealistic.” India’s AI research collaborations predominantly with allies The current OECD data on AI research publications shows a similar pattern as India’s AI strategic alliances. India has published slightly over 40,000 articles till the date in collaboration with other countries. Out of which, more than 26% of research publications were in collaboration with the US and 16% with the EU-27. 
“While the number of research publications is a useful metric, it is important to see if these collaborations are equitable or not,” said Basu. “So for instance, is the research funded by a large American corporation to better understand the Indian market in order to monetise it, or is it jointly funded by a philanthropic organisation to help citizens genuinely?” India has collaborated with China on more than 1,400 AI research publications. China, like the US, remains an important player in the field and plans to become the world’s AI leader by 2030. “We can take a stance in favour of democracy and against authoritarianism, but still maintain economic relationships with them. Just because you maintain an economic relationship with the country, does not mean you endorse their values,” said Basu when asked whether India can afford to lose out on Chinese collaboration as a result of strategic alliances like the Quad. In fact, “there is a strategic benefit when we are locked with a ‘soft conflict’ with them,” said Basu. Wrapping Up As India forges significant AI collaborations, which will benefit it, the country needs to champion the values and principles that it stands for, while making sure that the collaborations are in its strategic interest. Collaborations should also ensure equitable benefit to all the participants. At the same time, India should strategically continue maintaining economic relationships, and participating in research, with countries that it does not have an official alliance with.
|
Global power shifts in the post-Cold-War era characteristically moved away from traditional military rivalries to economic expansion and prowess. While physical resources like oil and minerals defined the geopolitics of countries for a long period, the last decade has seen, “data becoming the new oil,” and artificial intelligence (AI) as a geopolitical tool. AI helps […]
|
["AI Features"]
|
["geopolitics of AI"]
|
Kashyap Raibagi
|
2020-11-10T16:00:34
|
2020
| 782
|
["Anthropic", "Go", "artificial intelligence", "programming_languages:R", "AI", "RAG", "responsible AI", "geopolitics of AI", "GAN", "AI research", "R"]
|
["AI", "artificial intelligence", "Anthropic", "RAG", "R", "Go", "GAN", "responsible AI", "AI research", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-features/the-role-of-ai-collaboration-in-indias-geopolitics/
| 2
| 10
| 1
| true
| false
| false
|
10,011,000
|
Microsoft Launches Its Google Analytics Killer
|
Recently, Microsoft announced the general availability of its web analytics tool, known as Clarity. Microsoft Clarity is a free-to-use analytics product built to help website managers improve their website experiences through a better understanding of site visitor behaviour. According to the developers at the tech giant, with Clarity they have built a set of tools that help people who manage websites make more informed decisions about the modifications they should make to their sites. Clarity is an open-source user behaviour analytics tool that helps you understand how users are interacting with your website through features such as session replays and heatmaps. The tool shows which parts of a website get the most and least engagement, and it provides an invaluable interface for debugging. Clarity gives you the tools to make informed decisions about changes to your website using real evidence, and it lets you do so in a way that respects your users’ privacy and data security. According to the Microsoft Clarity team, the tool offers the following features- 1| Designed to Be Easy to Use and to Be Easy on Your Website The developers have designed it to be simple to use for developers and non-developers alike. Clarity is designed to have a very low impact on page load times so that users navigating to a site won’t have to wait for pages to load. 2| See What Clicks With Your Users Using Heatmaps Clarity provides several key capabilities, including heatmaps and session replay, as well as an insights dashboard. Heatmaps provide a visual way to examine large numbers of user interactions, and they come in two forms: clickmaps and scrollmaps. 3| Align Your Expectations to Observations with Session Playback The filtering mechanism you can use to slice recordings allows you to get extremely granular about which recordings to select. The developers have used machine learning to discover patterns in session recordings like “rage clicks,” “dead clicks,” and “excessive scrolling”. 4| Use the Insights Dashboard to View Your Website Performance Clarity provides a dashboard of aggregated metrics to help you get an overall understanding of the traffic on your site. One can see how many users were clicking on non-existent links or how many people scrolled up and down a page in search of something they couldn’t readily find.
|
Recently, Microsoft had announced the general availability of the web analytics tool, known as Clarity. Microsoft Clarity is a free-to-use analytics product built to help website managers improve their website experiences by a better understanding of site visitor behaviour. Register for our upcoming webinar. According to the developers at the tech giant, with Clarity, they […]
|
["AI News"]
|
["analytics tools", "data analytics tools", "Google Analytics"]
|
Ambika Choudhury
|
2020-10-30T18:11:44
|
2020
| 393
|
["machine learning", "programming_languages:R", "AI", "data analytics tools", "RAG", "Google Analytics", "analytics", "analytics tools", "R"]
|
["AI", "machine learning", "analytics", "RAG", "R", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/microsofts-analytics-tool-clarity-is-now-generally-available/
| 2
| 6
| 0
| false
| false
| false
|
10,046,573
|
Can Reinforcement Learning Be Used For Better Economic Policies
|
Income inequality is one of the major problems in economics. Policymakers use taxation as an effective tool to address it. In the simplest terms, the government collects money from people according to their income and redistributes it, either directly or indirectly. But developing the best tax policy is a major challenge. Economists have struggled to build one, and to date, it remains an open problem. Economic methodology is limited by scarce counterfactual data and simplistic behavioural models, and it offers limited opportunities to experiment with policies. Amidst this, machine learning based economic simulation can prove to be a powerful policy and mechanism design framework that can help overcome these limitations. To this end, researchers from Salesforce, Harvard University, and You.com have attempted to design an optimal economic policy via two-level deep reinforcement learning. AI Economist using Deep Reinforcement Learning Policy optimisation poses a mechanism design challenge. The government aims to find a policy under which the rational behaviour of the affected economic agents yields the desired social outcome. However, theoretical approaches to policy design are limited by the need for analytical tractability. They fail to capture the complexity of the real world. While machine learning and computational techniques for automated mechanism design hold promise for overcoming the existing challenges, thus far, there hasn’t been a general computational approach for policy design. What needs to be solved is a highly non-stationary, two-level, sequential decision-making problem where all the actors are learning: while economic agents learn rational, utility-maximising behaviours, the government learns to optimise its objective through policy choices. The authors of the study, “Optimal Economic Policy Design via Two-level Deep Reinforcement Learning”, introduce a new framework, the AI Economist, which combines machine learning and AI-driven economic simulation to overcome current challenges. Specifically, this technique builds on AI-driven economic simulations and two-level reinforcement learning as a new paradigm for economic policy design. The study shows that AI-driven simulations capture features of real-world economies without needing hand-crafted behavioural rules or simplifications for analytical tractability. The researchers used a single-step economy and a multi-step, micro-founded economic simulation called Gather-Trade-Build. The latter features multiple heterogeneous economic agents in a two-dimensional spatial environment. Gather-Trade-Build includes trading between agents and simulates the economy over extended periods of time. It serves as a rich testbed for AI-driven policy design and is more complex than traditional tax frameworks. The AI Economist uses two-level deep reinforcement learning: learning happens both for individual agents within the economy and at the level of the social planner. The agents and the social planner use deep neural networks to implement their policy models. Two-level RL is natural in many contexts, including mechanism design, the principal-agent problem, and regulating systems with misaligned incentives. The system compares the performance of billions of economic designs. The AI Economist uses learning curricula and entropy-based regularisation to solve the two-level problem, providing a tractable and scalable solution. 
This approach stabilises training using two assumptions: the agent and social planner should be encouraged to explore and co-adapt, and agents should not face high utility costs that discourage exploration during learning. This approach offers the following advantages: by design, it considers actors that co-adapt with economic policy, so it does not suffer from the Lucas critique (as per the Lucas critique, one cannot predict the effects of an economic policy change based only on relationships observed in historical data); the use of reinforcement learning provides rational agent behaviour; since the simulation framework is flexible, it supports a configurable number of agents and offers various choices in economic processes; the designer can choose any policy objective, and it needn’t be analytically tractable or differentiable; and the use of RL does not require knowledge of simulation or economic theory. Previous Solutions This isn’t the first time Salesforce has played with the idea of the AI Economist. In fact, the team introduced it in 2020. That version used reinforcement learning in tax research to provide a simulation- and data-driven solution. It used a collection of AI agents designed to simulate how real people might react to different taxes. In the simulation, each agent earned money by collecting and trading resources and building houses. The agents maximise their utility by adjusting movement, building, and trading behaviour. Simultaneously, the AI Economist optimises taxes and subsidies to promote global objectives.
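To make the two-level idea concrete, here is a deliberately tiny sketch, not the paper’s AI Economist and not full reinforcement learning: inner agents best-respond to a flat tax by choosing how much labour to supply, while an outer loop searches for the tax rate that maximises a utilitarian welfare objective. The wage values, utility function, flat-tax restriction, and grid search are all simplifying assumptions made for illustration.

```python
# Toy two-level policy design sketch (illustrative assumptions throughout).
import numpy as np

wages = np.array([1.0, 2.0, 4.0, 8.0])        # heterogeneous agent skills

def best_response_labour(tax, wage, grid=np.linspace(0.01, 1.0, 200)):
    # Inner level: each agent picks labour to maximise log consumption
    # minus a quadratic disutility of labour, taking the tax rate as given.
    utilities = np.log((1 - tax) * wage * grid + 1e-9) - 0.5 * grid**2
    return grid[np.argmax(utilities)]

def social_welfare(tax):
    labour = np.array([best_response_labour(tax, w) for w in wages])
    income = wages * labour
    rebate = tax * income.sum() / len(wages)   # equal lump-sum redistribution
    consumption = (1 - tax) * income + rebate
    utility = np.log(consumption) - 0.5 * labour**2
    return utility.sum()                       # utilitarian objective

# Outer level: the planner searches over flat tax rates.
taxes = np.linspace(0.0, 0.9, 91)
best_tax = taxes[np.argmax([social_welfare(t) for t in taxes])]
print(f"planner's preferred flat tax: {best_tax:.2f}")
```

The AI Economist replaces both levels with deep RL policies learned in a rich simulation, but the nesting, with agents adapting to the planner and the planner adapting to the agents, is the same.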
|
AI Economist combines machine learning and AI-driven economic simulation to overcome current challenges.
|
["IT Services"]
|
["Deep Reinforcement Learning", "Reinforcement Learning", "Salesforce"]
|
Shraddha Goled
|
2021-08-23T16:00:00
|
2021
| 705
|
["Go", "machine learning", "Reinforcement Learning", "AI", "neural network", "data-driven", "Scala", "RAG", "Deep Reinforcement Learning", "Aim", "Salesforce", "R", "Redis"]
|
["AI", "machine learning", "neural network", "Aim", "RAG", "Redis", "R", "Go", "Scala", "data-driven"]
|
https://analyticsindiamag.com/it-services/can-reinforcement-learning-be-used-for-better-economic-policies/
| 3
| 10
| 0
| false
| false
| true
|
10,080,571
|
The Real Reason Behind Koo’s Popularity in Brazil
|
Earlier this week, Koo founder Aprameya Radhakrishna said that the app was gaining traction in Brazil following the Twitter drama. Several media houses and publications also reported the popularity of the homegrown Koo app, which ranked at the top in Brazil—48 hours after its initial launch in the country. “Braaasiilll!!! Lalalalalalala!!!” tweeted the founder, seemingly unaware of the real meaning of the word ‘Koo’ in Portuguese, which is widely spoken in Brazil. https://twitter.com/softwaronnie/status/1593719995875426309?s=20&t=2k2Mzp-OeX4DITGEthEo3w On Monday, the company stated in a press release that the multilingual app entered the South American country after adding Portuguese as a language option in its interface. Koo is currently available in 11 languages in select countries. The exact launch date of the app in the Portuguese-speaking market remains undisclosed. However, the release stated that the micro-blogging platform was downloaded over a million times within Brazil in the span of two days. Koo, which treats Twitter as its ubiquitous competitor, was founded by CEO Aprameya Radhakrishna in 2020. Analytics India Magazine recently got in touch with the platform’s chief technology officer Phaneesh Gururaj, who said that Koo is one of the first platforms to have unveiled Voluntary Self-Verification and to publish the workings behind its algorithms—reiterating platform transparency and following a user-first approach. Addressing the launch in a market of over 160 million internet users, co-founder Mayank Bidawatka said that the platform is the first to create a language-immersive interface for connecting with the world. He also said that only 20% of the world’s population converses in English, leaving the other 80% speaking languages native to their own lands. Analytics firm Sensor Tower cited that Koo witnessed over 973,000 installs between November 14 and November 20, 2022. The platform currently features eccentric Brazilian celebrities such as actor Babu Santana, singer Claudia Leitte, and author Rosana Hermann. The press release further stated that Koo looks forward to positioning itself globally by launching the platform in several other countries and in multiple global languages. Shaky traction? Major gaming company Razer has also participated in the backside-related jokes on both Twitter and Koo. However, this is how Koo reacted to the joke, explaining that it is merely the ‘sound of a cute yellow bird’. https://twitter.com/KooForBrasil/status/1593659854236762113?s=20&t=ZFVZ5vtsXq5z8N4_Dv9RlQ However, within a few hours of launch, users in Brazil faced issues accessing the yellow bird site, including bugs that did not allow users to see their followers. The homegrown app has also faced security and moderation problems since it hit the Portuguese-speaking market. It was reported that hackers had taken control of the account of Felipe Neto, a popular influencer in Brazil—a warning sign about Koo’s lack of security. Koo later explained that they had relaxed their security to address the OTP issue and promised that all user data was well protected. In another thread, Neto said that Koo’s founder got in touch with him and resolved the issue within minutes. Will Koo rebrand? A case in point is how the home services firm UrbanClap rebranded itself as Urban Company, as the word ‘clap’ carries a rather negative connotation in certain Western cultures. 
Co-founder Abhiraj Bhal stated that the company plans to create an umbrella brand, housing all their sub-brands in the country. He added that the name will be carried to all international markets such as Dubai, Abu Dhabi, Singapore, and Sydney. Another e-commerce platform, Myntra, recently changed its logo after a complaint was lodged by a Mumbai-based activist who said that the logo was ‘insulting’ and ‘offensive’ towards women. Considering its rising popularity and increasing user base, Koo has undoubtedly become one of the best alternatives to Twitter after Elon Musk’s acquisition of the platform earlier this year. But is it time for the platform to make changes and rebrand itself?
|
Koo became the top app in Brazil within 48 hours of its initial launch.
|
["AI News"]
|
["Koo", "Twitter (X)"]
|
Bhuvana Kamath
|
2022-11-23T18:21:37
|
2022
| 633
|
["Go", "programming_languages:R", "AI", "programming_languages:Go", "analytics", "Twitter (X)", "R", "Koo"]
|
["AI", "analytics", "R", "Go", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/the-real-reason-behind-koos-popularity-in-brazil/
| 4
| 6
| 1
| false
| true
| false
|
64,707
|
How COVID-19 Crisis Has Impacted The IT Spending
|
The unprecedented nature of the health crisis has ripple effects on industries — whether it be travel and tourism, hospitality, healthcare, consumer electronics, banking and finance, or the technology industry. This crisis has not only pushed companies to gyrate through a new set of challenges but also forced them to alter their business strategies. According to experts, one of the biggest impacts has been on the information technology industry, where firms are struggling with their business continuity amid this crisis. Issues such as disruption in the supply chain, remote working of employees, and the unpredictability of the crisis have had a direct impact on IT spending. Not only is it slowing down tech spending but also short-term business investments. In fact, a recent report stated that worldwide IT spending is expected to decline by 5.1% this year to $2.25 trillion due to the pandemic outbreak. The report further noted that ICT spending, which includes telecom and business services, is also expected to decline by 3.4% in 2020. The industry is going through major disruption as businesses slow down their spending on hardware, software and IT services, which in turn can cause a downturn in the economy. Due to the extended lockdown, many employees in big organisations aren’t able to finish their projects, which again can create a massive decline in IT services spending. According to Stephen Minton, program vice president of IDC’s Customer Insights & Analysis group, “Inevitably a major economic recession, in Q2 especially, will translate into some big short-term reductions in IT spending by those companies and industries that are directly impacted.” He further stated that this is due to many reasons — “Some firms will cut capital spending and others will either delay new projects or seek to cut costs in other ways.” The report further stated that overall spending on devices like PCs and phones is also expected to be down drastically this year, which has been the major impact on IT spending this year. Growing spending in IT infrastructure services Although this pandemic is having a negative impact on many, experts believe that the crisis has turned out to be beneficial for cloud service providers, as employees are mandated to work from home, and students are expected to stay home and rely on broadband networks to continue their studies. These changes have forced companies to advance their investments in cloud computing in order to provide their employees with the resources needed to work from home amid the lockdown. This increasing demand for cloud services has also urged companies to enhance their IT infrastructure, and that is another area businesses are currently investing in. Working on cloud projects can also help businesses cut down costs, reduce CapEx and upgrade to advanced technologies. Companies are currently working with huge amounts of data, so it has become imperative for them to have software-as-a-service offerings to manage, visualise and analyse that data. This has benefitted cloud giants like Amazon, Microsoft and Google, as many organisations move beyond their traditional data centres to adopt the cloud. Alongside this, as remote working becomes critical for organisations, businesses have come to depend heavily on emerging communication technologies to accelerate their processes. 
Communication and video conferencing tools like Zoom, MS Teams, Skype and WebEx have seen increased demand, becoming the key communication channels between employees and peers amid this lockdown. In fact, in another survey by Analytics India Magazine, it was revealed that 54.5% of employees used MS Teams to remotely connect and network with other members of their analytics teams, while 13.6% used the conferencing tool Zoom. Besides, many tech companies are continuously advancing their technologies to provide value to their customers. Many industries, such as healthcare and banking and finance, are in dire need of advanced solutions to enhance their business productivity, and that’s where tech companies have come into the picture. IBM has recently launched a supercomputer with huge computing power to help analysts better understand the disease and its impacts. On the other hand, SAP has launched several initiatives to help address its customers’ supply chain issues. The company has also opened access to several of its solutions to help with their business continuity. Wrapping up Although the pandemic has created a huge disruption in the IT industry, it has also created opportunities for cloud service providers as well as businesses that work with emerging technologies like AI, analytics, and robotics. Emerging technologies offer various benefits in disease detection and prevention, which, in turn, has provided IT vendor companies with the opportunity to come up with solutions that help their customers fight this battle. Besides, a lot of startups are investing in artificial intelligence and other emerging technologies to cut down business costs and keep operations running smoothly amid the crisis.
|
The unprecedented nature of the health crisis has ripple effects on industries — whether it be to travel and tourism, hospitality, healthcare, consumer electronics, banking and finances, or technology industry. This crisis has not only pushed companies to gyrate through a new set of challenges but also forced them to alter their business strategies. According […]
|
["AI Features"]
|
["AI (Artificial Intelligence)", "Cloud services", "Machine Learning Algorithms"]
|
Sejuti Das
|
2020-05-07T10:43:25
|
2020
| 827
|
["Machine Learning Algorithms", "Go", "API", "artificial intelligence", "AI", "cloud computing", "Cloud services", "ViT", "analytics", "disruption", "GAN", "R", "AI (Artificial Intelligence)"]
|
["AI", "artificial intelligence", "analytics", "cloud computing", "R", "Go", "API", "GAN", "ViT", "disruption"]
|
https://analyticsindiamag.com/ai-features/how-covid-19-crisis-has-impacted-the-it-spending/
| 3
| 10
| 3
| false
| false
| false
|
10,082,998
|
AWS Announces Local Zone in Kolkata
|
AWS recently announced the general availability of Local Zones in Bangkok and Kolkata. The Local Zones will be used to deliver applications that require single-digit millisecond latency or local data processing. AWS Local Zones are a form of infrastructure deployment that places compute, storage, databases, and other selected AWS services close to large population and industrial centres. This comes after AWS announced at the beginning of the year that it would launch Local Zones in over 30 metro areas across 27 countries outside the US. Along with the newly announced Local Zone in Kolkata, AWS also has a Local Zone in Delhi. Additionally, a few weeks ago, AWS announced that it had set up a data centre in Hyderabad. Government, education, and non-profit organisations, along with startups and entrepreneurs, will be able to exercise greater choice in using Indian data centres to host apps and provide customer service. Guru Bala, head of solutions architecture at AWS Specialised Services, told AIM, “The project will provide customers with more flexibility and choice, while allowing them to architect their infrastructure for even greater fault tolerance, resiliency, and availability across geographic locations.” Minister of State for Electronics and IT Rajeev Chandrasekhar said recently that India is set to become a global cloud computing and data centre hub. The launch of Local Zones further positions India strongly in this direction.
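For teams that want to try the new zone, the sketch below shows one plausible way to opt in and extend an existing VPC into it using boto3. It assumes AWS credentials are already configured; the Kolkata zone group and zone names ("ap-south-1-ccu-1" and "ap-south-1-ccu-1a", under the Mumbai parent region), the VPC ID, and the CIDR block are assumptions and placeholders, so check the DescribeAvailabilityZones output for the actual values.

```python
# Hedged sketch: opting into a Local Zone and creating a subnet there with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Local Zones are opt-in: enable the zone group for the account.
ec2.modify_availability_zone_group(
    GroupName="ap-south-1-ccu-1",   # assumed Kolkata Local Zone group name
    OptInStatus="opted-in",
)

# Confirm the zone is now visible to the account.
zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
)
print([z["ZoneName"] for z in zones["AvailabilityZones"]])

# Extend an existing VPC into the Local Zone with a subnet.
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",          # placeholder VPC ID
    CidrBlock="10.0.8.0/24",                # placeholder CIDR block
    AvailabilityZone="ap-south-1-ccu-1a",   # assumed Kolkata Local Zone name
)
```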
|
After Delhi, AWS announces second Local Zone in Kolkata
|
["AI News"]
|
["AWS", "Cloud Computing", "data centre"]
|
Ayush Jain
|
2022-12-21T18:07:09
|
2022
| 232
|
["Go", "AWS", "AI", "cloud computing", "Git", "RAG", "Aim", "Cloud Computing", "GAN", "R", "data centre", "startup"]
|
["AI", "Aim", "RAG", "cloud computing", "AWS", "R", "Go", "Git", "GAN", "startup"]
|
https://analyticsindiamag.com/ai-news-updates/aws-announces-local-zone-in-kolkata/
| 3
| 10
| 1
| false
| false
| false
|
10,040,337
|
Make Python Code Faster With Numba
|
One of the major complaints that people, mostly die-hard C++ users, have with Python is that it’s slow. Yes, Python is a dynamically typed interpreted language and it is slow. Most people don’t know that Python can provide you direct access to your hardware to perform intensive calculations. Numba is an open-source Just-In-Time compiler that does exactly that. It enables Python developers to translate a subset of Python and NumPy code directly into machine code by using the LLVM compiler in the backend. In addition to that, Numba offers a wide range of choices for parallelizing Python code for CPUs and GPUs with trivial code changes. There are a lot of ways to approach compiling Python; the approach Numba takes is to compile individual functions or a collection of functions just in time as you need them. Source: https://www.youtube.com/watch?v=-4tD8kNHdXs&t=1167s Numba takes the bytecode of your function and looks at the types of arguments you pass to it. The arguments, supported by Python objects, are translated into representations with no CPython dependencies. This process is called “unboxing”. Once Numba has these two things, it goes down an analysis pipeline to figure out the types of everything inside the function based on what’s passed in. It then generates an intermediate representation (IR) of what the function is doing, filling in all the data types and all that kind of stuff. LLVM is responsible for most of the hard work; it inlines functions, auto vectorize loops, does other low-level code optimization expected by a C compiler and generates the machine code. This machine code is cached so that the next time the function is run, Numba doesn’t need to go through this whole pipeline but instead skip to the end. An important thing to note is that Numba doesn’t interact with or change the interpreter. This means it can only optimize what’s locally possible in the function; for instance, it can’t go to other parts of your program and say that “Oh, the operation would be a lot faster if this list was a NumPy array”. Another thing Numba does is that it looks for built-in and NumPy methods and swap them out with its own implementation. Using Numba to make Python & NumPy code faster Numba can be installed from PyPI as: pip install numba Numba uses decorators to convert Python functions into functions that compile themselves. The most common Numba decorator is @jit. Let’s create an example function and see @jit in action. @jit(nopython=True) def example_function(n): trace = 0.0 for i in range(n.shape[0]): trace += np.tanh(n[i, i]) return n + trace The nopython=True option tells Numba to fully compile the function to remove the Python interpreter calls completely. If it is not used, exceptions are raised, indicating places in the function that need to be refactored to achieve better-than-Python performance. Using nopython=True is strongly recommended. We’ll be using the %timeit magic function to measure execution time because it runs the function multiple times to get a more accurate estimate of short functions. Our function has not been compiled yet; to do that, we need to call it: n = np.arange(10000).reshape(100, 100) %timeit example_function(n) The slowest run took 20086.53 times longer than the fastest. This could mean that an intermediate result is being cached. 1 loop, best of 5: 11.9 µs per loop The function was compiled, executed and cached. Now when it is called again, the previously generated machine code is executed directly without any need for compilation. 
%timeit example_function(n) The slowest run took 4.89 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 5: 11.8 µs per loop When benchmarking Numba-compiled functions, it is important to time them without including the compilation step since the compilation of a given function only happens once. Let’s compare to the uncompiled function. Numba-compiled functions have a .py_func attribute that can be used to access the original uncompiled Python function. %timeit example_function.py_func(n) The slowest run took 6.77 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 5: 239 µs per loop The original Python function is more than 20 times slower than the Numba-compiled version. However, our example function used explicit loops, which are very fast in Numba and not so much in Python. Our function is really simple so we can try optimizing it by rewriting it using only NumPy expressions: def numpy_example(n): return n + np.tanh(np.diagonal(n)).sum() %timeit numpy_example(n) The slowest run took 8.53 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 5: 29.2 µs per loop The refactored NumPy version is roughly 10 times faster than the Python version but still slower than the Numba-compiled version. Multithreading with Numba Operations on NumPy array expressions are often broadcasted independently over the input elements and have a significant amount of implied parallelism. Numba’s ParallelAccelerator optimization identifies this parallelism and automatically distributes it over several threads. To enable the parallelization pass, all we need to do is use the parallel=True option. SQRT_2PI = np.sqrt(2 * np.pi) @jit(nopython=True, parallel=True) def gaussians(x, means, widths): n = means.shape[0] result = np.exp( -0.5 * ((x - means) / widths)**2 ) / widths return result / SQRT_2PI / n Let’s call the function once to compile it: means = np.random.uniform(-1, 1, size=1000000) widths = np.random.uniform(0.1, 0.3, size=1000000) gaussians(0.4, means, widths) Now we can accurately compare the effect of threading and compiling with the normal Python version: gaussians_nothread = jit(nopython=True)(gaussians.py_func) %timeit gaussians(0.4, means, widths) # numba-compiled and threading %timeit gaussians_nothread(0.4, means, widths) # no threading %timeit gaussians.py_func(0.4, means, widths) # normal python 10 loops, best of 5: 20.3 ms per loop 1 loop, best of 5: 26.1 ms per loop 10 loops, best of 5: 28.4 ms per loop There are situations suited for multithreading where there’s no array expression but rather a loop where each iteration is independent of the other. 
In these cases, we can use prange() in a for loop to indicate to ParallelAccelerator that this loop can be executed in parallel: import random # Serial version @jit(nopython=True) def monte_carlo_pi_serial(nsamples): acc = 0 for i in range(nsamples): x = random.random() y = random.random() if (x**2 + y**2) < 1.0: acc += 1 return 4.0 * acc / nsamples # Parallel version @jit(nopython=True, parallel=True) def monte_carlo_pi_parallel(nsamples): acc = 0 # Only change is here for i in numba.prange(nsamples): x = random.random() y = random.random() if (x**2 + y**2) < 1.0: acc += 1 return 4.0 * acc / nsamples %time monte_carlo_pi_serial(int(4e8)) %time monte_carlo_pi_parallel(int(4e8)) CPU times: user 5.07 s, sys: 23.9 ms, total: 5.09 s Wall time: 5.06 s CPU times: user 9.41 s, sys: 17 ms, total: 9.43 s Wall time: 4.9 s The above code implementations have been taken from the official tutorial binder available here. One thing to note here is that prange() automatically handles the reduction variable acc in a thread-safe way. Additionally, Numba automatically initializes the random number generator in each thread independently. Alternatively, you can also use modules like concurrent.futures or Dask to run functions in multiple threads. For these use-cases, ParallelAccelerator isn’t helpful; we only want to obtain the Numba-compiled function to run concurrently in different threads. For accomplishing this, we need the Numba function to release the Global Interpreter Lock (GIL) during execution. This can be done using the nogil=True option. I highly recommend watching Gil Forsyth’s SciPy 2017 talk on Numba; if you want a more in-depth understanding of Numba or in true Numba time-saving spirit, just refer to the documentation.
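To complement the nogil point above, here is a minimal sketch (not from the original tutorial) that compiles a function with nogil=True and runs it over array chunks from a standard-library thread pool; the workload, chunk count, and worker count are illustrative choices.

```python
# Minimal nogil + ThreadPoolExecutor sketch (illustrative, not from the article).
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from numba import jit

@jit(nopython=True, nogil=True)
def chunk_sum_of_squares(x):
    # Releases the GIL while running, so several threads can execute it at once.
    total = 0.0
    for i in range(x.shape[0]):
        total += x[i] * x[i]
    return total

data = np.random.rand(4_000_000)
chunks = np.array_split(data, 8)

chunk_sum_of_squares(chunks[0])   # trigger compilation once, outside the pool

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(chunk_sum_of_squares, chunks))

print(sum(partials))
```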
|
Numba is an open-source Just-In-Time compiler that enables Python developers to translate Python and NumPy code directly into machine code.
|
["AI Trends"]
|
["AI Tool"]
|
Aditya Singh
|
2021-05-19T15:00:00
|
2021
| 1,261
|
["Go", "NumPy", "programming_languages:R", "AI", "Python", "Ray", "C++", "Dask", "programming_languages:Python", "AI Tool", "R"]
|
["AI", "Ray", "NumPy", "Dask", "Python", "R", "Go", "C++", "programming_languages:Python", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-trends/make-python-code-faster-with-numba/
| 4
| 10
| 0
| true
| true
| false
|
10,050,029
|
Amazon Reveals AZ2 CPU Chip At Fall Hardware Event 2021
|
Amazon has unveiled its new smart home device, the Echo Show 15. During the announcement of the wall-mounted device, Amazon also revealed its new CPU. Known as the AZ2 Neural Edge CPU and residing inside the upcoming Echo Show 15, the chip uses four cores and delivers 22 times as many TOPS (trillions of operations per second) as the last-gen chip. The AZ2 possesses the ability to process machine-learning-based speech models “significantly faster,” according to Amazon’s announcement. The Echo Show 15 gets its vision processing capabilities from the new AZ2 processor. Last year’s Echo and Echo Dot speakers had the AZ1 chip, which allowed enhanced voice processing. The new chip can process what the camera is seeing, and all the computer vision processing happens on the device without sending any data to the cloud. Visual ID is optional and turned off by default. The information the AZ2 learns about a user’s face becomes part of Visual ID, and users must specifically enrol in the feature. This enables the Echo Show 15 to recognize the user and display custom content based on their Alexa profile. Amazon claims it’s the first company to offer this kind of privacy-first technology on smart speakers, and says the Visual ID technology is built with privacy at its foundation. Just like the AZ1, the AZ2 can handle speech recognition and computer vision workloads simultaneously. Currently, the Echo Show 15 is the only piece of Amazon hardware known to include the AZ2, but one can expect it to become a keystone technology in Amazon devices going forward. The Echo Show 15 will launch sometime this year for $250. The full specifications and the official public launch date for the Echo Show 15 and the AZ2 CPU are not available yet.
|
The AZ2 possesses the ability to process machine-learning-based speech models “significantly faster,” according to Amazon’s announcement.
|
["AI News"]
|
["AI (Artificial Intelligence)", "Amazon", "Amazon Echo", "Data Science", "Deep Learning", "Machine Learning"]
|
Victor Dey
|
2021-09-29T19:24:35
|
2021
| 292
|
["programming_languages:R", "AI", "Amazon Echo", "Amazon", "Machine Learning", "computer vision", "Aim", "ai_applications:computer vision", "Deep Learning", "Data Science", "R", "AI (Artificial Intelligence)"]
|
["AI", "computer vision", "Aim", "R", "programming_languages:R", "ai_applications:computer vision"]
|
https://analyticsindiamag.com/ai-news-updates/amazon-reveals-az2-cpu-chip-at-fall-hardware-event-2021/
| 3
| 6
| 0
| false
| false
| false
|
10,136,692
|
[Exclusive] Tech Mahindra to Announce Project Indus 2 Soon
|
Nikhil Malhotra, chief innovation officer at Tech Mahindra, announced at Cypher 2024, India’s biggest AI conference, that the company will launch Project Indus 2 within a couple of months. “NVIDIA is a great partner, and you should all look forward to next month when we plan to launch Project Indus 2. We’re going to make it state-of-the-art in terms of Hindi and its various dialects, using a modern, open-source approach. I developed this model to raise awareness and show the world that India has the capability to achieve this,” said Malhotra. Most recently, Malhotra also met NVIDIA chief Jensen Huang and the team at NVIDIA’s offices to discuss sovereign LLMs. Tech Mahindra launched Project Indus earlier this June through its R&D arm, Makers Lab. This AI model is built to converse in multiple Indic languages and dialects, starting with Hindi and its 37+ variations. The model has 1.2 billion parameters and was trained on 22 billion tokens. It is built on GPT-2 and designed to handle the complexities of the Hindi language. Furthermore, Malhotra shared that when they began working on Project Indus, the goal was not to focus on any specific language but to address various dialects. “India is home to 24 mother tongues and 1,645 dialects. Officially, the country speaks 19,200 dialects, some of which have become extinct or are endangered,” he said. “It’s not just about Hindi. It’s going to include variations of Hindi like Dogri, as well as languages such as Pancha Pargania and Magahi,” he added. He said that Project Indus can work on AI PCs locally without the requirement of GPUs. “The model is benchmarked on Intel Xeon servers, not GPUs. Since it’s based on Intel Xeon, our model runs on these AI PCs.” Malhotra revealed that they are not in the race to build 175-billion-parameter models. “We are not in a race of saying you’re building 170 billion parameters, because most of my research over the last 20 years in language says that after 3 billion to 4 billion parameters, the model knows the language,” he explained. Era of Sovereign LLMs Malhotra said that it’s not just India that is working on sovereign LLMs. “There are countries in Southeast Asia that have now taken the lead. All of them want to build large language models for their country, using their data hosted within the country. These models will reflect their culture, be aware of biases within that culture, and align with their country’s vision,” he said. Moreover, Malhotra shared that Tech Mahindra has built the first sovereign LLM for Indonesia, called Garuda. “This is a big model. It’s about an eight to nine billion parameter model. It’s actually trained on the entire Nvidia stack of Nemo,” said Malhotra. Earlier this year, Tech Mahindra partnered with Indosat Ooredoo Hutchison to build Garuda, an LLM for Bahasa Indonesia and its dialects. “The era of sovereign LLMs has just begun,” he said, adding that soon many countries, including Malaysia, Australia, and New Zealand, will be building indigenous LLMs that understand local languages and not just English. What’s Next? Malhotra said that one of the things he wants to do with AI, and one of his research goals, is to explore how AI can become less compute-intensive. Furthermore, he also unveiled his innovative concept for AI development, which he calls the ‘min-max regret model.’ This approach shifts away from traditional reward models and aims to empower AI to “dream” about its capabilities and understand its existence in a more profound way.
Malhotra explained that this dreaming model allows AI to contemplate its own questions and aspirations. “Can you dream about yourself? Once you develop your dream, now come back to life and start with the life that you have,” he said, highlighting the physical aspect of existence and how it relates to AI. Drawing a parallel between human cognition and AI functionality, Malhotra said that just as humans subconsciously store information in the hippocampus, AI systems will use their ‘memory’ to inform decision-making processes. “A lot of the data that you collect to do a lot of your information still resides at the back of the hippocampus, which is a memory. And as a result, you pull out that memory when you have to cycle,” he elaborated, likening it to learning to ride a bike—an instinctual skill honed from childhood that remains stored in our subconscious.
|
“Project Indus was built for $400,000 in response to OpenAI’s chief, Sam Altman, who claimed that India couldn’t build a model for under $10 million.”
|
["IT Services"]
|
["Tech Mahindra"]
|
Siddharth Jindal
|
2024-09-26T18:24:14
|
2024
| 731
|
["Go", "Tech Mahindra", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "GPT", "Aim", "GAN", "R", "llm_models:GPT"]
|
["AI", "Aim", "R", "Go", "GPT", "GAN", "innovation", "llm_models:GPT", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/it-services/exclusive-tech-mahindra-to-announce-project-indus-2-soon/
| 3
| 10
| 0
| true
| false
| false
|
10,166,301
|
NVIDIA Announces 2 Personal Supercomputers—One is as Small as Mac Mini
|
NVIDIA has announced two new personal AI supercomputers to handle AI workloads at the GPU Technology Conference (GTC) 2025 event. The company announced the DGX Spark and DGX Station, which are powered by the NVIDIA Grace Blackwell platform. These are aimed at helping AI developers, researchers, data scientists, and even students prototype, fine-tune, and run inference on large language models on a desktop. Models can be run locally or deployed on any cloud-based platform. “DGX Spark and DGX Station bring the power of the Grace Blackwell architecture, previously only available in the data centre, to the desktop,” the company said. Original equipment manufacturer (OEM) partners like ASUS, Dell, HP and Lenovo are set to develop the DGX Spark and the DGX Station. The DGX Spark, formerly known as ‘Project DIGITS’, is dubbed the world’s smallest AI supercomputer. It can deliver 1,000 trillion operations per second (TOPS) when running AI models and is priced at $3,000. The NVIDIA DGX Spark (5.91″ × 5.91″ × 1.99″) is only slightly larger than the Apple Mac Mini (5.00″ × 5.00″ × 1.96″). The more powerful NVIDIA DGX Station, on the other hand, features a massive 784 GB of memory to accelerate AI workloads. It is also the first desktop system built with the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip. NVIDIA says the DGX Station is purpose-built for teams that need the best desktop AI development platform. “This is the computer of the age of AI. This is what computers should look like,” said CEO Jensen Huang in the keynote. The company has opened reservations for DGX Spark systems, while the DGX Station is expected to be available later this year. NVIDIA had plenty of other announcements at the GTC event: it partnered with General Motors (GM) to develop AI-powered self-driving cars, introduced HALOS, a new AI-enabled automotive safety platform, and announced several updates in the robotics sector. Earlier this year, the company unveiled the GeForce RTX 5090 GPU, which is 30% smaller in volume and 30% better at energy dissipation than the RTX 4090.
|
Project DIGITS has been rebranded as DGX Spark, and the company has announced a new DGX Station.
|
["AI News"]
|
["AI (Artificial Intelligence)", "NVIDIA"]
|
Supreeth Koundinya
|
2025-03-19T12:56:09
|
2025
| 343
|
["programming_languages:R", "AI", "data_tools:Spark", "Git", "RTX 4090", "Aim", "ai_applications:robotics", "NVIDIA", "R", "AI (Artificial Intelligence)"]
|
["AI", "Aim", "R", "Git", "RTX 4090", "programming_languages:R", "data_tools:Spark", "ai_applications:robotics"]
|
https://analyticsindiamag.com/ai-news-updates/nvidia-announces-2-personal-supercomputers-one-is-as-small-as-mac-mini/
| 4
| 8
| 0
| false
| false
| false
|
21,876
|
Govt To Release Landmark Industrial Policy In May, To Focus On New Tech: Reports
|
File photo of Suresh Prabhu, Union Minister for Commerce and Industry, Government of India. Union Commerce and Industry Minister Suresh Prabhu may release a new industrial policy this May. The policy, which will be only the third major intervention after the industrial policies of 1956 and 1991, will reportedly seek to promote emerging sectors. According to a report, the draft of the proposed policy has already been released by the Commerce and Industry Ministry for consultation with various stakeholders. Industry insiders say that the new policy will completely revamp the Industrial Policy of 1991. Government sources told a news agency that the new policy would focus on Industrial Revolution 4.0 and would give a lot of importance to new technologies such as artificial intelligence, robotics, deep learning and the internet of things. “The ministry is holding consultation with states also on the new policy to see their best practices. It will be released soon,” the source added. The Department of Industrial Policy and Promotion (DIPP) in August 2017 floated a draft industrial policy with the aim of creating jobs for the next two decades, promoting foreign technology transfer, and attracting $100 billion in FDI annually. A financial magazine reported that the new industrial policy would concentrate on the following five key points: Strengthen ease of doing business and reduce compliance costs for the industry. Make provisions which will give weightage to the quality of foreign direct investment, with a preference for investments that create local value additions, like jobs. Make provisions for rationalisation of electricity costs for industries. Incentivise research and development with the objective of positioning India as a test bed for emerging technologies and creating an environment for ease of innovation. Help “medium”-sized enterprises in the SME sector. If the recently released Union Budget is anything to go by, the above-mentioned changes, as well as the announcement of the new landmark industrial policy, are not far off. Union Finance Minister Arun Jaitley had in his speech particularly stressed transforming India into a digital economy with the help of cutting-edge technologies in the digital space. “Technologies like ML, AI, IoT, 3D printing, and initiatives like Digital India, Startup India and Make in India would help in establishing itself as a digital economy”, Jaitley had said. Industry insiders are happy to see that the Modi-led NDA government is working towards supporting a tech-driven future. This year, Union Finance Minister Arun Jaitley has also doubled the allocation for the Digital India programme to ₹3,073 crore in 2018-19. Keeping in mind the current government’s long-term plans, reports have suggested that the Modi-led BJP will strive to meet its commitment to the Sustainable Development Goals (SDGs) by 2030 with the help of AI, given the technology’s potential to enable a slew of applications.
|
Union Commerce and Industry Minister Suresh Prabhu may release a new industrial policy this May. The policy, which will set a record for being the third major intervention after the industrial policies of 1956 and 1991, will reportedly seek to promote emerging sectors. According to a report, the draft for the proposed policy has already […]
|
["AI News"]
|
["arun jaitley", "BJP", "government of india", "Narendra Modi", "suresh prabhu"]
|
Prajakta Hebbar
|
2018-02-19T10:56:12
|
2018
| 472
|
["government of india", "Go", "artificial intelligence", "AI", "innovation", "ML", "arun jaitley", "Narendra Modi", "Git", "BJP", "Aim", "deep learning", "suresh prabhu", "R", "startup"]
|
["AI", "artificial intelligence", "ML", "deep learning", "Aim", "R", "Go", "Git", "innovation", "startup"]
|
https://analyticsindiamag.com/ai-news-updates/new-industrial-policy-suresh-prabhu/
| 3
| 10
| 4
| false
| false
| false
|
45,074
|
IIT Guwahati Launches M.Tech Programme In Data Science
|
Indian Institute of Technology Guwahati (IIT-G) has launched four programmes, including a Master’s course in Data Science. These programmes were introduced by IIT-G to address rising demand from students. The courses begin from the current academic session, 2019-20. The institute has also launched three new programmes in association with Gifu University, Japan. Calling the new programmes ‘trendsetter’ and ‘gamechanger’, Prof TG Sitharam, Director at IIT Guwahati, told a leading news portal, “The focus of the programmes is on the study, invention, and creative use of technologies to create effective, usable, entertaining experiences with technology through interdisciplinary research in engineering, design, behavioural and social sciences, and to understand the impact of technology on individuals, groups, and organisations. The institute envisions to produce successful graduates who will be capable of leading the changing scenarios of tomorrow through innovation, in-depth thought and values.” The other new programmes launched are: the International Joint Master’s Degree Program in Food Science and Technology, the International Joint PhD Program in Food Science and Technology, and the International Joint PhD Program in Integrated Mechanical Engineering. Last week, IIT-G made news by signing a memorandum of understanding with RD Grow Green India, an incubating company at the Technology Incubation Centre in IIT Guwahati, to work on the treatment of contaminated drinking water and industrial effluent.
|
Indian Institute of Technology Guwahati (IIT-G) has launched four programmes, including a Master’s course in Data Science. These programmes were introduced by IIT-G to address the rising demand by the students. These courses which begin from the current academic session 2019-20. The institute has also launched three new programmes in association with Gifu University, Japan. […]
|
["AI News"]
|
["Data Science", "IIT Guwahati"]
|
Prajakta Hebbar
|
2019-08-27T18:42:47
|
2019
| 215
|
["data science", "programming_languages:R", "AI", "innovation", "IIT Guwahati", "GAN", "Data Science", "R"]
|
["AI", "data science", "R", "GAN", "innovation", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/iit-guwahati-launches-m-tech-programme-in-data-science/
| 2
| 6
| 0
| false
| false
| true
|
10,048,216
|
Data Science Hiring Process At Gupshup
|
Gupshup, which entered the unicorn club this year, is one of the leading conversational messaging platforms in the country, powering over six billion messages per month. Thousands of large and small businesses currently use Gupshup to build conversational experiences across verticals, including marketing, sales, support, etc. Gupshup’s platform provides a single messaging API for 30+ channels, a conversational experience-building tool kit for any use case, and a network of emerging market partnerships across messaging channels, device manufacturers, ISVs and operators. Thanks to Gupshup, many businesses have made conversations an integral part of their customer engagement success. In June 2021, the company took hundreds of restaurant outlets online across India with conversation commerce. Leveraging the Gupshup IP (GIP) messaging platform, Gupshup helped the restaurants digitise the customer experience to order, pay and receive home delivery directly. Gupshup provided restaurants with messaging-based marketing solutions to promote offers, specials, and deals to their regular customers and has made restaurant management simpler with mobile-based management tools. That, in a way, helped restaurants get more direct business from customers, implement touchless dining for safety, and own their customer relationships through their own marketing efforts. The search for the brains behind its innovative technology services and solutions led us to Gupshup’s data science team. “Since AI/ML is at the heart of our whole product lineup, it is present in virtually all of our services,” said Beerud Sheth, CEO and co-founder at Gupshup. Gupshup currently has over 100 million users using its AI models to improve their messaging experiences. The co-founder said that the conversational assistants and document cognition-based AI found on Gupshup’s website have contributed significantly. The company told Analytics India Magazine that it is on a hiring spree and looks to expand its data science team in the coming months. “We have a huge pipeline of AI use cases that we want to turn around. The amount of data that we have is increasing at a fast pace and, along with that, demand for value delivering AI use cases. The data scientists will be working on those use cases,” said Sheth. Team Structure Gupshup has a data science team of 10 members consisting of senior data scientists, each with about four reportees, including junior data scientists, machine learning engineers, etc. This data science team reports directly to the director of AI. Interview Process The hiring process for data science roles at Gupshup is quite simple. It consists of four levels: resume screening, a 10-minute phone-screen round, a coding assignment, and a panel interview. “At Gupshup, we focus on accuracy and hence, the most important key result area (KRA) is model accuracy. Key performance area (KPA) also includes user adoption and manufacturing quality. The number of use-cases launched to production is used to calculate the turnaround time from inception to production,” said Sheth. Expectations “We look for candidates with knowledge of Scikit-Learn, Tensorflow, Pytorch, and SparkML. Along with these, candidates who aspire to join us must have a strong understanding of both traditional and deep machine learning,” said Sheth. He also said that the product range at Gupshup requires natural language processing, sequence models, transformers and other related technologies. Dos and Don’ts “All the teams at Gupshup are talented individuals with significant knowledge.
Therefore, candidates who plan to apply for data science jobs in Gupshup should brush up their ML fundamentals and should be familiar with the most common NLP scenarios,” said Sheth. Sharing past experiences, the Gupshup co-founder said they had observed candidates trying to showcase their knowledge instead of real hands-on experience. Instead, candidates should be able to explain in-depth details of the work that they have done previously. Further, he said the most common mistake candidates commit is focusing only on theoretical knowledge and failing to understand all of the finer intricacies of bringing a concept to life. Work Culture “We have a work culture that is driven by innovation and risk-taking. We encourage our engineers to prototype fast and test concepts in the market. We have a huge existing customer base with a rich data bank from which we derive value,” said Sheth. As a result, the team believes that building a use case from inception to production is significantly faster at Gupshup than at most competitors. “Since we focus extensively on automation, bringing a concept to life also becomes a fast-paced as well as a result-oriented process,” added Sheth. He said candidates who join the team would get an opportunity to learn quickly across various use cases in the areas of NLP, computer vision, structured data, and more. The team also said that they have cutting-edge infrastructure, including GPUs and TPUs, to train the most complex models. If selected, candidates will get to experiment with the distributed training of deep models over many GPUs and TPUs.
|
In June 2021, the company took hundreds of restaurant outlets online across India with conversation commerce.
|
["AI Hirings"]
|
["data science and manufacturing", "data science hiring india", "data science salary India", "Gupshup", "jobs in bangalore", "rapido data science team structure"]
|
Amit Naik
|
2021-09-14T10:00:00
|
2021
| 792
|
["data science", "scikit-learn", "data science and manufacturing", "machine learning", "jobs in bangalore", "AI", "TensorFlow", "ML", "PyTorch", "Gupshup", "computer vision", "NLP", "analytics", "data science hiring india", "data science salary India", "rapido data science team structure"]
|
["AI", "machine learning", "ML", "NLP", "computer vision", "data science", "analytics", "TensorFlow", "PyTorch", "scikit-learn"]
|
https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-gupshup/
| 2
| 10
| 3
| false
| false
| false
|
52,933
|
Top 12 AI/ML Channels On Youtube To Kickstart 2020
|
Right now we are in the middle of one of the greatest transformations of pedagogical practices in the history of mankind. The Internet is to modern man what the printing press was to the Renaissance era. And, with YouTube, mastery is just a click away. The increasing popularity of YouTube has seen a marked shift of educators, students and experts away from conventional pedagogy. Here is a list of YouTube channels dedicated to the field of artificial intelligence (AI): Two Minute Papers 434K subscribers 24,869,570 views A new research paper on AI gets released every other day. It can be a small adjustment to a gradient descent method that improves accuracy by 0.5%, or something as big as GANs. Whatever the case, Two Minute Papers ensures that the audience does not miss out on papers of relevance. This channel keeps subscribers up to date with the latest, most significant advancements in the field of AI. Sentdex 784K subscribers 72,240,692 views The channel’s host Harrison Kinsley conducts interactive coding sessions and walks the audience through every line of code, explaining the reason behind every single trivial error. These videos are a rich source of information for anyone who is serious about a career in data science. Sentdex brings the fun back to coding in a great way. CodeEmporium 15.5K subscribers 829,581 views CodeEmporium covers everything new and interesting in Machine Learning, Deep Learning, Data Science, and Artificial Intelligence. With his quick guides to ML, the host aims to build a community of data science geeks. 3Blue1Brown 2.34M subscribers 104,966,150 views 3Blue1Brown does in one video what might otherwise take 10 years of schooling — the visualization of concepts. The channel’s creator excels at conveying complex mathematical topics in the most intuitive ways possible. The series on calculus and neural networks have done wonders for audiences at any level of expertise. The teaching style is exemplary and second to none. Computerphile 1.53M subscribers 119,475,055 views A sister channel of Numberphile, Computerphile is one of the finest channels on YouTube and hits the nail on the head from the very first second. They produce short videos packed full of concepts communicated through wit and wisdom. The host plays the role of the audience and asks questions that trouble amateurs, thus making it an interactive experience. The videos focus primarily on the science behind how computers work. This knowledge is crucial for learning concepts like computer vision and the computational power of GPUs. Lex Fridman 198K subscribers 11,042,700 views Lex Fridman is the Joe Rogan of the machine learning community. This is one of the fastest-growing YouTube channels covering AI and ML. The host is a deep learning researcher from MIT who became known for tutorials on computer vision models. In his videos, Fridman interviews giants of the industry, from Schmidhuber to Elon Musk, and gives his audience a rich source of content for understanding the fascinating world of AI. Microsoft Research 132K subscribers 30,780,093 views Microsoft’s AI research touches many fields, ranging from healthcare to economics. They also provide thought leadership to both business and engineering leaders in fields such as machine learning, artificial intelligence, and cloud computing. This channel introduces viewers to fundamental and groundbreaking research.
Henry AI Labs 4.92K subscribers 129,831 views This channel makes videos summarizing popular ideas in deep learning and artificial intelligence. This includes topics such as computer vision, natural language processing, graph embeddings, generative adversarial networks, reinforcement learning, and monthly updates on the most talked-about research papers. deeplizard 51.3K subscribers 3,709,455 views The videos focus primarily on the inner workings of popular machine learning models, explained from the ground up. Viewers get to know about training models, hyperparameter tuning, and other challenges that surface while building an ML pipeline. Brandon Rohrer 57.2K subscribers 3,047,920 views Brandon Rohrer owns one of the greatest interactive tutorial channels on YouTube. He is a data scientist at Facebook and clearly knows what he is talking about. He cuts down on the jargon surrounding data science and conveys the concepts with intuitive real-world examples. If you had to watch one channel to learn the language of a data scientist, it should be this one. Center for Brains, Minds and Machines (CBMM) 19.7K subscribers 1,066,693 views CBMM aims to create a new field — the Science and Engineering of Intelligence — by bringing together computer scientists, cognitive scientists, and neuroscientists to work in close collaboration. This new field is dedicated to developing a computationally based understanding of human intelligence and establishing an engineering practice based on that understanding. StatQuest with Josh Starmer 183K subscribers 8,472,531 views Statistics and data analysis are a lot easier than most people think. Because computers do the math for you, the most important thing is to understand the concepts and main ideas, which host Josh Starmer makes pretty simple while keeping learning statistics fun. The videos are short, informative and a great resource for those who are starting out in the field of data science, as well as for those who need a quick look at concepts before an interview. Analytics India Magazine 21K subscribers 1,724,846 views Analytics India Magazine (AIM) is India’s leading online portal on analytics and related fields. We aim to promote and discuss ideas on business analytics from India’s perspective. The website was established with the aim of evangelizing analytics in India through healthy discussion and the dissemination of ideas on next-gen analytics.
|
Right now we are in the middle of one of the greatest transformations of pedagogical practices in the history of mankind. The Internet is to modern man what the printing press was to the Renaissance era. And, with youtube, mastery is just a click away. The increasing popularity of YouTube has seen a sporadic shift […]
|
["AI Trends"]
|
["lex fridman", "simple ai tutorial", "YouTube"]
|
Ram Sagar
|
2020-01-01T10:00:27
|
2020
| 929
|
["simple ai tutorial", "data science", "artificial intelligence", "machine learning", "AI", "neural network", "ML", "computer vision", "Aim", "deep learning", "analytics", "lex fridman", "YouTube"]
|
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "neural network", "computer vision", "data science", "analytics", "Aim"]
|
https://analyticsindiamag.com/ai-trends/top-12-ai-ml-channels-on-youtube-to-kickstart-2020/
| 4
| 10
| 2
| false
| true
| true
|
10,063,399
|
Double-edged: Pharma company tweaks its AI model and finds 40k biochemical weapons in 6 hours
|
Researchers at Collaborations Pharmaceuticals were in for a surprise when they tweaked their AI model for drug discovery to look for biochemical weapons. The machine learning algorithm found 40,000 candidates in just six hours. Collaborations Pharmaceuticals recently published computational machine learning models for toxicity prediction in different areas. The company explored how AI could be used to design toxic molecules, and the exercise evolved into a computational proof of concept for making biochemical weapons. The model generated 40,000 molecules that scored within the desired threshold. In the process, the AI designed not only VX but also many other known chemical warfare agents. Many new molecules were also designed that looked equally plausible, and these were predicted to be more toxic, based on their predicted LD50 values, than publicly known chemical warfare agents. Interestingly, the datasets used for training the AI did not include these nerve agents. The paper, Dual use of artificial-intelligence-powered drug discovery, is a wake-up call for companies in the ‘AI in drug discovery’ community. Though some domain expertise in chemistry or toxicology is still required to actually produce toxic substances or biological agents that can cause significant harm, once these fields intersect with machine learning models, all you need is the ability to code and to understand the output of the models.
|
In the process, the AI designed not only VX but also many other known chemical warfare agents.
|
["AI News"]
|
[]
|
Kartik Wali
|
2022-03-23T19:26:28
|
2022
| 215
|
["Go", "machine learning", "TPU", "programming_languages:R", "AI", "programming_languages:Go", "R"]
|
["AI", "machine learning", "TPU", "R", "Go", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/double-edged-pharmaceutical-company-tweaks-its-ai-model-and-finds-40k-biochemical-weapons-in-6-hours/
| 3
| 7
| 0
| false
| false
| true
|
10,042,033
|
Why We Don’t See More Robotics Startups
|
In April 2019, San Francisco-based robotics startup Anki shut shop after filing for bankruptcy. In the nine years of its existence, the company had developed popular robotic toys such as Overdrive, Cozmo, and Vector. CEO Boris Sofman blamed the company’s closure on a last-minute financing snafu. The same year, another robotics company, Jibo, went bust. Notably, Time magazine had touted Jibo’s eponymous social bot as one of the best innovations of 2017. The inability to raise funding in time led to the unceremonious exit of Jibo as well. Both Anki and Jibo were promising startups. Though Google-owned Schaft, Rethink Robotics, and Mayfield Robotics showed a lot of potential, they too folded eventually. If history is any indication, robotics as a business is difficult to sustain. Robotics startup In 2020, a charming video of robots dancing to the 1962 hit ‘Do You Love Me’ broke the internet. The ensemble cast in the video was from the house of Boston Dynamics. The company is hugely popular for its agile and intelligent service robots. However, Boston Dynamics has suffered major losses in recent years. In the fiscal year ending March 2020, the company posted a net loss of $103 million, a 60 percent bump from the year before. Later, automobile giant Hyundai came to the rescue of Boston Dynamics and picked up a majority stake in the company. So, what explains the sad fate of some of the pioneering companies in the field of robotics? The struggle of robotics firms to keep their heads above water, as opposed to the thriving AI companies, is truly a study in contrast. In the last 12 months, AI-focused startups raised a total of $73.4 billion. In the same time period (until March 11, 2021), robotics startups raised $6.3 billion, a paltry sum in comparison. Hardware development is hard, and robotics even harder. Robot development requires expertise and skills spanning several domains, including software, mechanical, electronics, electromechanical engineering, and complex assembly. Getting machines to perform simple actions like climbing stairs or moving through a room without running into an obstacle can be quite a daunting task. This is one of the reasons why the majority of robots have been confined to limited, repetitive tasks, as in the case of stationary industrial robots and semi-autonomous robots. The lack of demand for robots outside of a handful of domains such as logistics and hospitals has been cited as another major reason for the slowdown. The shrinking scope of robotics as a sustainable business has forced investors and entrepreneurs to rethink. Moreover, robotics is a capital-intensive field. For any venture to be successful, it must address a specific market need. Most robotics companies have to spend millions to develop robots for specific use cases. The ROI on the resources spent on R&D is far from optimal, making it difficult to sustain the business. What’s next Despite the looming uncertainty around the field, reports suggest the robotics industry may grow from $76.6 billion in 2020 to $176.8 billion in 2025. However, it may be noted that, unlike AI-focused startups, robotics startups have been largely unsuccessful in entering the unicorn club. A notable exception is China-based UBtech Robotics, arguably the world’s most successful robotics company.
|
In April 2019, San Francisco-based robotics startup Anki shut shop after filing for bankruptcy. In the nine years of its existence, the company had developed popular robotic toys such as Overdrive, Cozmo, and Vector. CEO Boris Sofman blamed the company’s closure on a last-minute financing snafu. Same year, another robotics company, Jibo, went bust. Notably, […]
|
["AI Startups"]
|
["Boston Dynamics", "Funding", "Robotics", "valuation"]
|
Shraddha Goled
|
2021-06-21T11:00:00
|
2021
| 533
|
["Go", "Boston Dynamics", "Funding", "API", "unicorn", "funding", "AI", "programming_languages:R", "innovation", "programming_languages:Go", "Robotics", "valuation", "R", "startup"]
|
["AI", "R", "Go", "API", "innovation", "startup", "unicorn", "funding", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-startups/why-we-dont-see-more-robotics-startups/
| 3
| 10
| 4
| false
| false
| false
|
9,280
|
MathWorks India to Host Premier Multi-city Engineering Conference
|
MathWorks, the leading developer of mathematical computing software such as MATLAB and Simulink, is organising MATLAB EXPO 2016 in India. MathWorks products are used by engineers, scientists and universities around the world to accelerate the pace of discovery, innovation, and development in automotive, aerospace, electronics, financial services, biotech-pharmaceutical, and other industries. ABOUT MATLAB EXPO: MATLAB EXPO 2016, one of India’s premier events for the research and engineering communities, will be hosted by MathWorks India in Bangalore, Pune, and, for the first time, Hyderabad. The EXPO features presentations and workshops by MathWorks technical professionals and customers. Over the last six years, this conference has served as a stage for engineers, scientists and researchers to meet, converse and learn about cutting-edge product capabilities in MATLAB and Simulink. KEY PRESENTERS: The EXPO will feature key industry leaders including General Motors, Robert Bosch, National Aerospace Laboratories, and Rockwell Collins India Design Center. Additionally, Richard Rovner, Vice President of Marketing, MathWorks, will present the keynote, “The Rise of Engineering-Driven Analytics”. He will discuss how the flexibility to run analytics, either on massive datasets in IT or cloud infrastructures or as data are acquired on smart sensors and embedded devices, is enabling organizations in many industries to develop intelligent products, devices, and services that expand the business impact of their data and analytics. For the detailed agenda, visit www.matlabexpo.in. KEY DATES: Bangalore: Thursday, April 21, 2016 Pune: Tuesday, April 26, 2016 Hyderabad: Thursday, April 28, 2016 VENUE: Park Plaza, 90/4, Outer Ring Road, Marathahalli Village, Bengaluru, Karnataka The Westin, Pune Koregaon Park, 36/3-B Koregaon Park Annexe, Mundhwa Road, Ghorpadi, Pune, Maharashtra Park Hyatt, Road Number 2, Banjara Hills, Hyderabad, Telangana 500034 MATLAB EXPO 2016 aims to address the research and engineering communities in Bangalore, Pune and Hyderabad. Spread the word and participate!
|
MathWorks, the leading developer of mathematical computing softwares such as MATLAB and Simulink are organising MATLAB EXPO 2016 in India. Products by mathworks are utilized by engineers, scientists and universities around the world to accelerate the pace of discovery, innovation, and development in automotive, aerospace, electronics, financial services, biotech-pharmaceutical, and other industries. ABOUT MATLAB: MATLAB EXPO 2016, […]
|
["Deep Tech"]
|
["analytics conference"]
|
Apoorva Verma
|
2016-03-09T12:55:48
|
2016
| 292
|
["programming_languages:R", "AI", "RPA", "innovation", "analytics conference", "BERT", "Aim", "llm_models:BERT", "analytics", "GAN", "R"]
|
["AI", "analytics", "Aim", "R", "BERT", "GAN", "RPA", "innovation", "llm_models:BERT", "programming_languages:R"]
|
https://analyticsindiamag.com/deep-tech/mathworks-india-host-premier-multi-city-engineering-conference/
| 3
| 10
| 3
| false
| false
| false
|
10,008,212
|
Why Community Platforms Should be Built On GraphML
|
“Twitter is perhaps one of the largest producers of graph-structured data in the world, second only to the Large Hadron Collider!” Graphs are popular in fields like biology, quantum chemistry, and high-energy physics. Social media platforms like Twitter, too, are leveraging graph-based ML for their services. Before we get into how social media platforms can benefit from graphs, let’s briefly talk about what separates GraphML from traditional deep learning methodologies. Overview Of GraphML Graphs can be looked at as mathematical abstractions of relations within a complex system. A graph consists of nodes or vertices with pairwise connections (edges). The idea behind graph representation learning approaches is to learn a mapping that embeds nodes, or entire graphs, as points in a learned space; in Twitter’s case, the nodes are users and the edges are the conversations between them. Deep learning on graphs is also known as geometric deep learning because the goal is to make sure that the geometric relationships in this learned space reflect the structure of the original graph. These learned embeddings can then be used as inputs to a machine learning model. Like convolutional neural networks (CNNs) in computer vision tasks, graph models rely on the way local operations are designed, much like weight sharing in neural networks. If two nodes or users share an edge or like the same post, a machine learning model can use these signals as inputs to dish out recommendations. A significant difference compared to classical deep neural networks is that graphs are permutation-invariant, i.e. independent of the order of neighbour nodes; there are no rules on how the nodes must be arranged. Graph problems are application-dependent. For instance, in node-wise problems, properties of individual nodes, say spammers in a network, are predicted, whereas in graph-wise problems, predictions are made about the entire graph. Researchers at Stanford stated that machine learning on graphs is about finding a way to incorporate information about the structure of the graph into the machine learning model. Social media giants like Twitter are leveraging and even researching graph ML to push the boundaries. How Twitter Uses It Twitter interactions can be likened to very large-scale complex graphs where the nodes model users and Tweets, while the edges model interactions such as replies, Retweets, or favs. Twitter handles hundreds of millions of Tweets and Retweets every day. “This makes Twitter perhaps one of the largest producers of graph-structured data in the world, second perhaps only to the Large Hadron Collider,” says Michael Bronstein, head of graph learning research at Twitter. These millions of retweets and likes translate to hundreds of millions of nodes and billions of edges. Moreover, these applications are time-constrained: users want to see real-time trends and customised recommendations. Current research in graph network models, writes Bronstein, only deals with modestly sized graphs, which is inadequate for large-scale settings, both in terms of architecture and training algorithms. The key to successful use of graph neural networks is making the right tradeoff between performance, computational complexity, memory footprint, and training and inference time. Furthermore, the existing literature doesn’t address the dynamic problem: on platforms like Twitter, real-world interactions between people are dynamic by nature, and we witness trends, topics, and interests emerge and fade out all the time.
To handle these dynamics, Twitter’s graph takes shape by feeding on a stream of asynchronous events such as new users, their following lists, likes, tweets and retweets. Twitter users generate graphs in different forms. For instance, the follow graph represents the social network of users, the engagement graph captures how people interact with Tweets, and the graphs used by the Integrity Data Science team relate users to the devices and IP addresses from which they access the service in order to detect malicious behaviour and violations. The graph learning team at Twitter believes that deep learning on graphs has a lot of untapped potential. For example, the majority of graph neural networks are limited to nodes and edges only, but higher-order structures such as motifs, graphlets, or simplicial complexes are known to be important in complex networks. These richer structures offer more expressive power to graph-based models, but, as mentioned earlier, there is a tradeoff.
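To make the neighbour-aggregation idea behind such graph models concrete, here is a minimal, illustrative sketch in Python (a toy example with made-up users and engagement edges, not Twitter’s actual system) of one round of message passing over a small graph:
import numpy as np

# Toy engagement graph: nodes are users, edges are interactions (e.g. retweets).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
num_nodes, dim = 4, 8
features = np.random.rand(num_nodes, dim)  # initial node embeddings

# Build an adjacency list
neighbours = {i: [] for i in range(num_nodes)}
for u, v in edges:
    neighbours[u].append(v)
    neighbours[v].append(u)

# One round of message passing: each node averages its neighbours' embeddings
# (averaging is permutation-invariant, matching the property described above)
# and mixes them with its own embedding through weight matrices (random here,
# learned in a real model).
W_self, W_neigh = np.random.rand(dim, dim), np.random.rand(dim, dim)
updated = np.zeros_like(features)
for node in range(num_nodes):
    agg = features[neighbours[node]].mean(axis=0) if neighbours[node] else np.zeros(dim)
    updated[node] = np.tanh(features[node] @ W_self + agg @ W_neigh)
Stacking several such rounds lets information from multi-hop neighbourhoods flow into each node’s embedding, which can then feed a downstream model for recommendations or abuse detection.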
|
“Twitter perhaps one of the largest producers of graph-structured data in the world, second only to the Large Hadron Collider!” Graphs are popular with fields like biology, quantum chemistry, and high-energy physics. Social media platforms like Twitter too, are leveraging graph-based ML for their services. Before we get into how social media platforms can benefit […]
|
["Deep Tech"]
|
["corporate analytics platform", "graph neural networks", "hadoop problems"]
|
Ram Sagar
|
2020-09-24T11:00:46
|
2020
| 687
|
["data science", "Go", "machine learning", "AI", "neural network", "corporate analytics platform", "ML", "computer vision", "RAG", "hadoop problems", "deep learning", "graph neural networks", "R"]
|
["AI", "machine learning", "ML", "deep learning", "neural network", "computer vision", "data science", "RAG", "R", "Go"]
|
https://analyticsindiamag.com/deep-tech/social-media-platforms-graph-machine-learning/
| 3
| 10
| 0
| false
| true
| false
|
31,047
|
SBI Mutual Funds Launches Its First AI-Powered Voice Assistant
|
SBI Mutual Funds this Tuesday launched its first artificial intelligence-powered voice assistant in collaboration with Google. It was developed in partnership with AllinCall Research and Solutions Private Limited, a Mumbai-based startup. The official statement shared by the public sector banking and financial services company said that the voice assistant can currently be accessed through any smartphone or device with Google Assistant. The statement also added that the bot was created to assist investors across multiple areas through a user-friendly voice interface, including getting basic product-related information, locating the nearest branch, checking their KYC status, using the SIP calculator, receiving a call back from customer care or retrieving account statements. Ashwani Bhatia, MD and CEO at SBI Funds Management, said in a statement, “We as a fund house constantly strive to introduce new services for our investors and technology-enabled services are the order of the day. I am happy to announce the launch of our voice assistant, a unique initiative in the mutual fund industry. The voice assistant acts as another interface for investors to interact with the fund house, get basic information on their investments and do more.” SBI plans to develop the voice assistant further so that it can help investors with transactions, portfolio valuations and other value-added services at large. Vikas Agnihotri, Country Director for Sales at Google India, said, “We are pleased to be working with and look forward to extending more engaging experiences to Google Home and the Assistant.” Last year in December, SBI Mutual Fund had launched a virtual assistant, or chatbot, called YUVA, which was built with the aim of adapting to the changing communication style. To date, the bot has handled over 2 million queries.
|
SBI Mutual Funds this Tuesday launched their first artificial intelligence-powered voice assistant in collaboration with Google. It was developed in partnership with AllinCall Research and Solutions Private Limited, a Mumbai based startup. The official statement shared by the public sector banking and financial services company said that the voice assistant can currently be accessed through […]
|
["AI News"]
|
["FinTech", "sbi"]
|
Martin F.R.
|
2018-12-04T11:58:39
|
2018
| 302
|
["sbi", "Go", "artificial intelligence", "programming_languages:R", "AI", "programming_languages:Go", "Aim", "FinTech", "R", "startup"]
|
["AI", "artificial intelligence", "Aim", "R", "Go", "startup", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/sbi-mutual-funds-launches-its-first-ai-powered-voice-assistant/
| 2
| 8
| 2
| false
| false
| false
|
8,296
|
International Workshop on Big Data Benchmarking in Delhi
|
With the proliferation of big data hardware and software solutions in industry and research, there is a pressing unmet need for benchmarks that can provide objective evaluations of alternative technologies and approaches to solving a given big data problem. There is a strong tradition of industry standards for computer system and database benchmarks, e.g., from standards groups like the Transaction Processing Performance Council (TPC) and the Standard Performance Evaluation Corporation (SPEC), and supercomputing community activities such as the TOP500. Without benchmarking, in a rapidly developing technological field, it is difficult to assess the quality and the utility of new solutions. Are they good enough to meet the fast-changing requirements of big data challenges? Are they scalable and robust? Can they handle, say, high-velocity stream data from finance? Information-rich unstructured data such as text or video from multimedia? Complex and large network data from telecommunications? Traditional large-scale industry standard benchmarks only test systems using database sizes up to 100 terabytes. Benchmarks are thus needed to model real-world processing pipelines that incorporate realistic features of big data applications. Benchmarks can be defined at multiple levels — from micro-benchmarks, for low-level system operations, to application-level benchmarks for testing scenarios based on end-user applications, whose “end-to-end” performance is more directly relevant to end users in a specific application domain. Big data benchmarks can provide objective measures quantifying the performance, scalability, elasticity, and price/performance of systems designed to support big data applications, to facilitate evaluation of alternative solutions. They can also characterize the new feature sets, enormous data sizes, and shifting loads of big data applications, and the large-scale and evolving system configurations and heterogeneous technologies of big data platforms. The Seventh International Workshop on Big Data Benchmarking (WBDB 2015) will be held at the India Habitat Centre, New Delhi, on December 14-15, 2015. WBDB 2015 will bring together experts and outstanding researchers from international and Indian academia, industry and government administration. Professor Michael J. Franklin, Thomas M. Siebel Professor and Chair of the Computer Science Division at the University of California, Berkeley, USA, and Director of the Berkeley Algorithms, Machines, and People Laboratory (the renowned “AMPLab” – the birthplace of Spark Streaming, SparkR, etc.), will deliver the first keynote address at WBDB 2015. The workshop is jointly organized by the San Diego Supercomputer Center, University of California San Diego, the Indian Statistical Institute and the Public Health Foundation of India. Besides big data benchmarking, the workshop will also focus on big data analytics in health systems, air quality management and agriculture. Papers and posters on cutting-edge themes will be presented by researchers from institutes nationwide, such as the Indian Institute of Science, the Indian Institutes of Technology, and the Indian Institute of Public Health, as well as from institutions abroad. For registration and further details, visit the workshop website: http://clds.sdsc.edu/wbdb2015.in
|
With the proliferation of big data hardware and software solutions in industry and research, there is a pressing unmet need for benchmarks that can provide objective evaluations of alternative technologies and the approaches to solve a given big data problem. There is a strong tradition of industry standards for computer system and database benchmarks, e.g., […]
|
["Deep Tech"]
|
["Analytics Case Study"]
|
AIM Media House
|
2015-11-21T08:53:52
|
2015
| 461
|
["big data", "Go", "API", "programming_languages:R", "AI", "Scala", "GAN", "ViT", "analytics", "Analytics Case Study", "R"]
|
["AI", "analytics", "R", "Go", "Scala", "API", "big data", "GAN", "ViT", "programming_languages:R"]
|
https://analyticsindiamag.com/deep-tech/international-workshop-on-big-data-benchmarking-in-delhi/
| 3
| 10
| 1
| false
| false
| true
|
10,024,252
|
Google Cloud And Siemens To Cooperate On AI-Based Solutions In Manufacturing
|
Recently, Google and Siemens announced a new cooperation to optimise factory processes and improve productivity on the shop floor. The new cooperation aims to enable the scaled deployment of AI-based solutions for industrial manufacturing. Siemens intends to integrate Google Cloud’s leading data cloud and artificial intelligence and machine learning (AI/ML) technologies with its factory automation solutions to help manufacturers innovate for the future. Deploying AI to the shop floor and integrating it into automation and the network is a complex task, requiring highly specialised expertise and innovative products such as Siemens Industrial Edge. The goal of the cooperation between Google Cloud and Siemens is to make the deployment of AI in connection with the Industrial Edge, and its management at scale, easier, empowering employees as they work on the plant floor, automating mundane tasks, and improving overall quality. By combining Google Cloud’s data cloud and AI/ML capabilities with Siemens’ Digital Industries Factory Automation portfolio, manufacturers will be able to harmonise their factory data, run cloud-based AI/ML models on top of that data, and deploy algorithms at the network edge. This enables applications such as visual inspection of products or predicting the wear and tear of machines on the assembly line. Axel Lorenz, VP of Control at Factory Automation of Siemens Digital Industries, said, “The potential for artificial intelligence to radically transform the plant floor is far from being exhausted. Many manufacturers are still stuck in AI ‘pilot projects’ today – we want to change that.” Lorenz added, “Combining AI/ML technology from Google Cloud with Siemens’ solutions for Industrial Edge and industrial operation will be a game-changer for the manufacturing industry.” Dominik Wee, Managing Director for Manufacturing and Industrial at Google Cloud, said, “Siemens is a leader in advancing industrial automation and software, and Google Cloud is a leader in data analytics and AI/ML. This cooperation will combine the best of both worlds and bring AI/ML to the manufacturing industry at scale. By simplifying the deployment of AI in industrial use cases, we’re helping employees augment their critical work on the shop floor.”
|
By combining Google Cloud’s data cloud and AI/ML capabilities with Siemens’ Digital Industries Factory Automation portfolio, manufacturers will be able to harmonize their factory data, run cloud-based AI/ML models on top of that data, and deploy algorithms at the network edge.
|
["AI News"]
|
["AI for manufacturing", "Google", "Google Cloud", "Siemens"]
|
Ambika Choudhury
|
2021-04-19T17:40:52
|
2021
| 342
|
["Go", "Google Cloud", "artificial intelligence", "machine learning", "AI", "ML", "Git", "Siemens", "Aim", "AI for manufacturing", "analytics", "Google", "ViT", "R"]
|
["AI", "artificial intelligence", "machine learning", "ML", "analytics", "Aim", "R", "Go", "Git", "ViT"]
|
https://analyticsindiamag.com/ai-news-updates/google-cloud-and-siemens-to-cooperate-on-ai-based-solutions-in-manufacturing/
| 3
| 10
| 1
| false
| false
| false
|
10,045,189
|
NVIDIA’s AI Startup Acceleration Platform Surpasses 8,500 Mark
|
Launched in 2016, NVIDIA Inception, an acceleration platform for AI startups, has now surpassed 8,500 members. It now accounts for about two-thirds of the total number of AI startups worldwide, as per Pitchbook. NVIDIA Inception is one of the largest AI startup ecosystems in the world, with cumulative funding of $60 billion and members in 90 countries. Since its launch, the platform has grown more than tenfold; the member count increased 17 percent in the first half of 2021 alone, and more than 3,000 AI startups have joined NVIDIA Inception post-2020. In the country-wise breakup, the United States accounts for the highest number of AI startups (representing 27 percent) and the largest amount of cumulative funding ($27 billion) under the NVIDIA program. As much as 42 percent of these US-based startups are operating out of California. China follows the US, both in terms of funding and company stage, with 12 percent of NVIDIA Inception members based in China. India is ranked third (7 percent) and the United Kingdom comes fourth (6 percent). The four countries together account for over half of all the startups on NVIDIA Inception’s platform. Startups from Germany, Russia, France, Sweden, the Netherlands, Korea and Japan are also incubated under the NVIDIA platform. In terms of sectors, healthcare, IT services, media and entertainment, intelligent video analytics (IVA), and robotics make up the top five under NVIDIA Inception. AI startups in healthcare account for 16 percent of Inception members, followed by IT at 15 percent. AI startups in IVA make up 8 percent, with M&E and robotics AI startups tied at 7 percent.
|
Since the launch, the platform has grown more than tenfold.
|
["AI News"]
|
["Startups"]
|
Shraddha Goled
|
2021-08-03T17:41:48
|
2021
| 267
|
["funding", "programming_languages:R", "AI", "RPA", "analytics", "ai_applications:robotics", "Startups", "R", "startup"]
|
["AI", "analytics", "R", "RPA", "startup", "funding", "programming_languages:R", "ai_applications:robotics"]
|
https://analyticsindiamag.com/ai-news-updates/nvidias-ai-startup-acceleration-platform-surpasses-8500-mark/
| 3
| 8
| 1
| false
| false
| false
|
10,078,041
|
Researchers Debate: Is Neuroscience the Foundation of Future AGI Systems?
|
The paper, ‘Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution’, co-authored by 27 highly prominent AI researchers and neuroscientists, proposes a roadmap for the path towards building an Artificial General Intelligence (AGI). An AGI system, unlike Artificial Narrow Intelligence (ANI) systems, which are designed to perform specific tasks like playing chess—or participating in a game show like Jeopardy!—will be exposed to unpredictable environments that it has not been specifically trained on, and asked to navigate through them. For the authors, such a human-like artificial intelligence system is achievable. They categorise what is ‘human-like’ as systems that excel in “vision, reward-based learning, interacting with the physical world, and language”. Doubling down on NeuroAI research However, some of the key advances in AI research currently, such as convolutional Artificial Neural Networks (ANNs) and Reinforcement Learning (RL), have been limited as they are built upon decades-old findings in neuroscience. According to the authors, the latest developments in the neuroscience field offer a broader scope for NeuroAI research on the path to AGI. The foreseeable step right now is to make systems that consist of a few basic ingredients of intelligence—namely, “adaptability, flexibility, and the ability to make general inferences from sparse observations”. These ingredients are already available in some form in most basic sensorimotor circuits. The paper argues that the neuroscience of embodied interaction with the world observed in all animals can be monumental in bringing the dream of a ‘human-like’ AI much closer. The idea takes inspiration from the evolutionary capabilities of animals to adapt to different environments. If the neural-level circuits of animals are broken down into their constituents, an AI system capable of the same can be emulated. Neuroscience research in AI development: Is it needed? Since its release, the paper has renewed the discussion surrounding the role of neuroscience research in the development of AI systems. There are disagreements over whether neuroscience has had a tangible impact on AI modelling. What do the critics say? In response to the paper’s claim that neuroscience should continue to drive AI progress, DeepMind research scientist David Pfau said that neuroscience never drove AI in the first place, further adding that “there’s a difference between drawing some high level inspiration from classic work and directly drawing on the latest research”. Sam Gershman, Professor in the Department of Psychology and Center for Brain Science at Harvard University, also adds to the discussion by expressing doubt that neuroscience research can directly deliver algorithms that can be plugged into the system. He writes, “new engineering ideas come from thinking about the structure of problems, not reading the tea leaves of biology.” Gershman also poses an interesting question that pivots the debate in a specific direction: consider the counterfactual world where engineers knew nothing about neuroscience. Do you think that we wouldn’t have convolutional networks or reinforcement learning? The question propels us to ask whether the two fields, driven by different kinds of curiosity—conceptual and empirical—need merging.
Adding to the list of critics, Luigi Acerbi, Assistant Professor of Machine & Human Intelligence at the University of Helsinki, provides a fairly level-headed take on the current discourse, saying: “The importance of neuroscience on AI/ML development in the past is hard to quantify, but it’s fairly uncontroversial to say that some inspiration and ideas did come from neuro—although much less than one would expect or might like to admit. In the present, it’s close to zero.” Acerbi concurs with Pfau’s comment, adding that the influences neuroscience has had are all limited to high-level analogies used to model AI systems and not towards detailed biological implementation. In a similar vein, Alberto Romero, Analyst at Cambrian AI, explains that artificial neurons are extremely simple and are based on the 80-year-old model of the neuron, compared to the current sophisticated models of the human brain. Experts on Neuroscience’s Potential for AI Research Against the criticisms, several other researchers have made claims for how neuroscience has shaped, or can shape, the developments in AI/ML systems. Yann LeCun, Chief AI Scientist at Meta and one of the authors of the paper, writes this in response to such claims: You are wrong. Neuroscience greatly influenced me (there is a direct line from Hubel & Wiesel to ConvNets) and Geoff Hinton. And the whole idea of neural nets and learning by adjusting synaptic weights *clearly* comes from neuroscience. — Yann LeCun (@ylecun) October 22, 2022 Similarly, the doubts over the impact of neuroscience on the field of AI research were also addressed by Surya Ganguli, Research Scientist at Meta and one of the contributors to the paper. Ganguli directs readers’ attention to an article he had written in 2018, wherein he provided concrete examples of productive collaboration between research on biological and artificial systems over the past 60 years. Gary Marcus, Professor Emeritus at NYU, also shared some key ideas in neuroscience yet to be embraced by machine learning models, which he felt should have been included in the paper: ideas from neurosci yet to be fully embraced in ML: – massive amount of structure (eg Van Essen diagram) – power of dendrites (@mattlark/@YiotaPoirazi etc) – variety of neurons (@AllenInstitute) – areawise specialization (@Nancy_Kanwisher) – intrinsic cues & development (Rakic) — Gary Marcus (@GaryMarcus) October 23, 2022 Final Thoughts Overall, there hasn’t yet been a serious rebuttal by the critics to the examples outlined above. Pfau did, however, respond to LeCun’s comment, suggesting that neuroscience studies on the detailed structures of a neuron or a cell do not directly answer the problems that AI researchers work on. The discussion so far leads us to believe that the question is not so much whether neuroscience has been influential, but to what degree the latest neuroscience research can help in solving some key engineering problems faced by current AI systems in their pursuit of general AI. However, what we know for sure is that neuroscience and AI share the same foundation—since the dream of AGI rests on the fascination with building ‘human-like’ intelligent systems—until they reach a point of divergence, and that point of divergence is currently unknown. As long as the hope for AGI is alive, neuroscience will remain a lever that AI research holds onto to establish the foundation of future models.
|
A whitepaper released recently argues for going all-in on the latest developments in neuroscience to spearhead research for the next generation of Artificial Intelligence (AI).
|
["AI Features"]
|
["AGI", "AI Research", "intelligence", "Machine Learning", "Meta", "neuroscience"]
|
Ayush Jain
|
2022-10-26T17:00:00
|
2022
| 1,027
|
["Go", "Meta", "artificial intelligence", "machine learning", "AI", "neural network", "ML", "Machine Learning", "AI Research", "intelligence", "RAG", "BERT", "Aim", "AGI", "neuroscience", "R"]
|
["AI", "artificial intelligence", "machine learning", "ML", "neural network", "Aim", "RAG", "R", "Go", "BERT"]
|
https://analyticsindiamag.com/ai-features/researchers-debate-is-neuroscience-the-foundation-of-future-agi-systems/
| 3
| 10
| 0
| false
| true
| true
|
26,399
|
Understanding DensePose, Facebook’s New Tool That Revolutionises Human Body Estimation
|
In June 2018, social media giant Facebook open-sourced DensePose, a tool which was internally built by their artificial intelligence team. The tool has the ability to extract a 3D mesh model of a human body from two-dimensional RGB images. Facebook is also releasing the underlying code and dataset that DensePose was trained upon. The training set is called DensePose-COCO. It will prove to be an invaluable tool for many computer graphics and computer vision researchers as the dataset contains image-to-surface correspondences annotated on 50,000 persons from the COCO dataset. This particular tool can have a great impact on researchers and engineers working on scanners and 3D printing applications. The core team behind this project includes Rıza Alp Güler from INRIA and CentraleSupélec, and Natalia Neverova and Iasonas Kokkinos from FAIR. The research team states, “We involve human annotators to establish dense correspondences from 2D images to surface-based representations of the human body… If done naively, this would require manipulating a surface through rotations — which can be frustratingly inefficient. Instead, we construct a two-stage annotation pipeline to efficiently gather annotations for image-to-surface correspondence.” Getting Complete Surface-Based Image Interpretation The problem solved by DensePose is two-fold and deals with: (1) the narrow problem of human understanding, and (2) multi-view or single-view registration. The tool can be used to extract a 3D mesh model of a human body from 2D RGB images, but with several changes, it might be possible to use the same algorithms for different applications. The researchers state that human understanding research primarily aims at a very small set of body landmarks, such as joints and elbows. The researchers also state that these kinds of approaches may only be good for limited applications like action or gesture recognition. This approach also limits the scope of image interpretation. The team states that because of these limitations they wanted to improve upon the present algorithms: “We wanted to go further. Imagine trying new clothes on via a photo, or putting costumes on your friend’s photos. For these tasks, a more complete, surface-based image interpretation is required.” The system thus tries to understand human images in terms of surface-based models. Finding relations and correspondences between surfaces can be very useful. The researchers’ work shows that we can calculate dense correspondences between 2D RGB images and 3D surface models for the human body. The researchers take into account more than 5,000 nodes rather than only 10-20 joints. The speed-up thus acquired can be used in virtual and augmented reality applications. The open-sourced system can handle thousands of human images simultaneously on a single GPU. As stated above, the researchers have released DensePose-COCO to complement this. Image-to-surface correspondences form the ground truth for training. The DensePose Network The researchers claim to have built a novel deep architecture for the task. They also claim that, thanks to the Caffe2-based Detectron framework, the network is as fast as Mask R-CNN. As the researchers say, “We build on FAIR’s Detectron system and extend it to incorporate dense pose estimation capabilities. As in Detectron’s Mask-RCNN system, we use Region-of-Interest Pooling followed by fully-convolutional processing. We augment the network with three output channels, trained to deliver an assignment of pixels to parts, and U-V coordinates”.
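To make the head design in the quote above concrete, here is a minimal, hypothetical PyTorch sketch of a DensePose-style output head (not FAIR's actual implementation): a small convolutional tower over RoI features, followed by per-pixel part classification and per-part U-V regression branches.

```python
# Hypothetical sketch of a DensePose-style head: part assignment + U-V regression.
# Not FAIR's implementation; shapes and layer sizes are illustrative only.
import torch
import torch.nn as nn

class DensePoseStyleHead(nn.Module):
    def __init__(self, in_channels=256, num_parts=24):
        super().__init__()
        self.tower = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
        )
        self.part_logits = nn.Conv2d(256, num_parts + 1, 1)  # +1 for background
        self.u_coords = nn.Conv2d(256, num_parts, 1)         # U coordinate per part
        self.v_coords = nn.Conv2d(256, num_parts, 1)         # V coordinate per part

    def forward(self, roi_features):
        x = self.tower(roi_features)
        return self.part_logits(x), self.u_coords(x), self.v_coords(x)

# roi_features would come from Region-of-Interest pooling over a detection backbone
feats = torch.randn(2, 256, 14, 14)
parts, u, v = DensePoseStyleHead()(feats)
print(parts.shape, u.shape, v.shape)
```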
The strategy of the research project is to find dense correspondence by dividing the surface into many parts. The system determines, for every pixel, which surface part it belongs to and where on the 2D parameterization of that part it corresponds. (Image: architecture of the DensePose network) The system can be trained in a supervised fashion. But it is claimed that better results can be achieved by “inpainting” the knowledge learned from supervision into non-annotated regions of the human body. This is achieved by training a CNN-based “teacher network” to reconstruct ground truth values given the images. Facebook says, “Detectron is a high-performance codebase for object detection, covering both bounding box and object instance segmentation outputs.” The team has designed the tool to be flexible for rapid implementation and evaluation. Detectron is used by the FAIR team on numerous state-of-the-art research projects. The commitment to open-sourcing state-of-the-art technologies is commendable. The DensePose GitHub repository also hosts trained models that can be readily used by researchers. The researchers think that making DensePose open and accessible to all will make a huge difference. And they are right, because there are not many datasets and tools available in the field of multi-view registration and 3D model building. The data is very important and will be a treasure for many computer vision researchers in the field. Facebook feels that the release of the dataset will also bring together engineers and researchers working in the fields of Computer Vision, Computer Graphics and Augmented Reality. Some of the applications mentioned by FB for such a technology are “creating whole-body filters or learning new dance moves from your cell phone.”
|
In June 2018, social media giant Facebook open-sourced DensePose, a tool which was internally built by their artificial intelligence team. The tool has the ability to extract a 3D mesh model of a human body from two-dimensional RGB images. Facebook is also releasing the underlying code and dataset that DensePose was trained upon. The training […]
|
[]
|
["augmented reality", "Facebook", "Facebook AI", "Object Detection", "virtual reality"]
|
Abhijeet Katte
|
2018-07-12T11:16:52
|
2018
| 809
|
["Go", "artificial intelligence", "TPU", "Facebook AI", "AI", "neural network", "augmented reality", "virtual reality", "computer vision", "Aim", "object detection", "Object Detection", "Rust", "Facebook", "R"]
|
["AI", "artificial intelligence", "neural network", "computer vision", "Aim", "object detection", "TPU", "R", "Go", "Rust"]
|
https://analyticsindiamag.com/ai-features/understanding-densepose-facebooks-new-tool-that-revolutionises-human-body-estimation/
| 3
| 10
| 0
| false
| false
| true
|
10,172,552
|
TSMC Accelerates Second Arizona Chip Plant Construction
|
Taiwan Semiconductor Manufacturing Company (TSMC) has begun construction of its second plant in Arizona ahead of schedule, with installation slated for completion in the third quarter of 2026. The facility, known as P2 (second plant), will utilise a three-nanometre (3 nm) process and is expected to begin production in 2027. The move addresses growing customer demand for US-based manufacturing and aims to respond to trade measures such as US tariffs, The Commercial Times reported. Equipment installation is expected to begin as early as September 2026, with full production targeted for the following year. Construction on P2 officially commenced in April 2025. The two-year build time reflects a compressed schedule, with the project running at full speed to meet customer needs. Equipment vendors noted that post-construction, internal factory adjustments typically require around two years. The project is also expected to benefit Taiwanese engineering companies, such as Han Tang and Fanxuan, which previously worked on TSMC’s first Arizona facility (P1). Supply chain participants indicated these firms are expected to improve long-term profits due to prior experience. Meanwhile, suppliers of speciality gases and chemicals are set to receive increased orders from North America, with companies such as Shangpin and Shengyi attracting positive attention from market analysts. Despite these developments, Taiwan remains a central part of TSMC’s operations. Advanced packaging, including Chip-on-Wafer-on-Substrate (CoWoS), will continue to be handled in Taiwan. Although TSMC has announced plans for two advanced packaging facilities in the US, construction of the first, AP1, is not expected to begin before the third quarter of 2026. Industry sources told The Commercial Times that most of TSMC’s US plant developments are unlikely to reach full capacity before 2029. In the meantime, prices for US-produced wafers are expected to rise by over 10%, while global wafer prices are anticipated to increase by 3% to 5% next year, driven by demand for AI and increasing construction costs. TSMC’s investment in its home market continues. Nine new factories and 11 production lines are currently under development in Taiwan. Among these, the Kaohsiung F22 plant for 2-nm technology is scheduled for relocation in Q3 2025, with a third facility expected to be completed by Q1 2026.
|
Regardless, advanced packaging, including Chip-on-Wafer-on-Substrate, will continue to be handled in Taiwan.
|
["AI News"]
|
["AI Chip Wars", "Chip Manufacturing", "Taiwan", "tsmc"]
|
Sanjana Gupta
|
2025-06-30T11:32:53
|
2025
| 360
|
["Taiwan", "programming_languages:R", "AI", "AI Chip Wars", "Aim", "tsmc", "R", "Chip Manufacturing"]
|
["AI", "Aim", "R", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/tsmc-accelerates-second-arizona-chip-plant-construction/
| 3
| 4
| 4
| false
| false
| false
|
10,086,453
|
Is Age Discrimination in Tech for Real?
|
When the Robert De Niro and Anne Hathaway starrer ‘The Intern’ was released, it not only showed the unorthodox chemistry of the leads but also touched upon the theme of age discrimination in the workplace. Although a work of fiction, it bears an uncanny resemblance to actual events in the IT sector, and may not be purely coincidental, after all. Millennials and Gen Z comprise 68 to 70% and 18 to 20% of the workforce, respectively. PayScale reports that the median employee age at three industry leaders—Facebook, LinkedIn, and SpaceX—is under 29. Only three businesses—IBM, Oracle, and HP—have a median employee age of over 33. Dispelling the Myths of Ageism in Indian Tech Indian IT firms are not far behind when it comes to age bias. As per research by AIM, major tech firms in India have substantially fewer senior employees above 50 years of age. For instance, both Infosys and TCS have around 50% of employees in the age bracket of 20 to 35 years, 40% between 35 and 50 years, and a mere 10% above 50 years. On the other hand, IBM has 45% between 20 and 35 years, 30% between 35 and 50 years, and 25% above 50 years. In their latest annual earnings reports, IT giants like Tech Mahindra and TCS saw a decline of over 6,000 and 2,200 employees, respectively. The companies are still focused on hiring freshers and are expected to recruit a total of 1.57 lakh new graduates by the end of FY23. In 2021, Infosys and TCS conducted campus hiring and onboarded 61,000 freshers. Infosys plans to hire more than 50,000 freshers in FY 2023, while TCS aims to recruit over 40,000 individuals. In retrospect, about twelve years ago, Infosys had to face a few lawsuits regarding age discrimination from applicants over 50. Interestingly, it has also been accused of discriminating against consultants of Indian origin, which is undoubtedly ironic. “A survey by JobBuzz found that 33 percent of employees in India face or have faced age-based biases at their workplace. Despite legal protections, age discrimination can still occur in the workplace in various forms, such as not hiring older workers, denying promotions or benefits to older employees, or forcing early retirement. While some countries have put strong labour laws against it, the Indian ecosystem is still evolving, and the timing is just about right to bring in law against age-based employment discrimination,” Aditya Malik, founder of ValueMatrix.ai told AIM. But unlike the US, India does not have a law against age discrimination as yet. New Tech, Age-Old Problem Last month, IBM relocated 80 AIX employees to India in the third quarter of 2022 amid reports that the “redeployed” employees were older and senior, sparking conversations about age discrimination in tech. The company has faced similar allegations in the past, including a 2018 lawsuit claiming IBM fired 20,000 ‘dinobabies’ over 40 to make space for a younger workforce. IBM has also recently settled a lawsuit with the family of Jorgen Lohnn, an employee who died by suicide after being fired. Despite denials from the company, the case highlights a broader problem of age discrimination in the tech industry. Not just IBM, several other tech giants too have been accused of the same. In 2019, Google agreed to pay $11 million to end a class-action lawsuit brought by 227 individuals over 40 who alleged age discrimination during the hiring process. The lawsuit was filed by software engineer Cheryl Fillekes, who was highly qualified but repeatedly rejected by Google between 2007 and 2014, starting when she was 47.
According to AIM Recruits, most companies do not have an age policy while recruiting. But when they refuse to hire older candidates, it is usually because the specific job might be physically or mentally strenuous. Senior employees are laid off to cut expenses as they are generally highly paid. Echoing similar sentiments, Malik said that senior employees might face a greater risk of being laid off due to factors such as higher compensation and benefits and the perception that they are less productive and may not be as adaptable to evolving technologies. Many tech companies have come under fire for their preference towards younger generations over elders. From Facebook founder Mark Zuckerberg’s controversial 2007 comment that “young people are just smarter“, to the lawsuit filed by the US Equal Employment Opportunity Commission (EEOC) against IBM over alleged age discrimination, ageism in the tech ecosystem is a topic that continues to make headlines. But why does this preference exist? It stems from the perception that older generations are less flexible, unwilling to adapt to new technologies and expensive. According to a study by Professor Andrea Rosales of the Universitat Oberta de Catalunya, ageist attitudes are present in the tech ecosystem worldwide. While a large majority of companies focus on inclusivity and diversity, they often overlook the experience that senior folks can bring to the table. In an industry where senior executives make up a mere 10% of IT and tech company workforces, it becomes important for companies to reassess the situation and support senior talent.
|
The tech industry faces a widespread problem of age bias with companies accused of preferring younger generations due to the perception that older workers are not as adaptable to new technologies
|
["AI Features"]
|
["Facebook", "HCL Technology", "IBM", "Meta", "Oracle", "Twitter (X)"]
|
Shritama Saha
|
2023-02-03T11:00:00
|
2023
| 844
|
["Go", "Meta", "AWS", "AI", "cloud_platforms:AWS", "programming_languages:R", "Oracle", "BERT", "Aim", "llm_models:BERT", "ViT", "IBM", "HCL Technology", "Facebook", "Twitter (X)", "R"]
|
["AI", "Aim", "AWS", "R", "Go", "BERT", "ViT", "llm_models:BERT", "cloud_platforms:AWS", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-features/is-age-discrimination-in-tech-for-real/
| 3
| 10
| 2
| false
| false
| true
|
10,130,147
|
Mistral AI Unveils Mistral Large 2, Beats Llama 3.1 on Code and Math
|
A day after Meta released Llama 3.1, Mistral AI has announced Mistral Large 2, the latest generation of its flagship model, offering substantial improvements in code generation, mathematics, and multilingual support. The model introduces advanced function-calling capabilities and is available on la Plateforme. https://twitter.com/_philschmid/status/1816134218839384113 With a 128k context window and support for dozens of languages, including French, German, Spanish, and Chinese, Mistral Large 2 aims to cater to diverse linguistic needs. It also supports 80+ coding languages, such as Python, Java, and C++. The model is designed for single-node inference and long-context applications, boasting 123 billion parameters. Mistral Large 2 is released under the Mistral Research License for research and non-commercial use. It achieves 84.0% accuracy on the MMLU benchmark, setting a new standard for performance and cost efficiency in open models. In code generation and reasoning, it competes with leading models like GPT-4o and Llama 3. The model’s training focused on reducing hallucinations and ensuring accurate outputs, significantly enhancing its reasoning and problem-solving skills. Mistral Large 2 is trained to acknowledge its limitations in providing solutions, reflecting its commitment to accuracy. Improvements in instruction-following and conversational capabilities are evident, with the model excelling in benchmarks such as MT-Bench, Wild Bench, and Arena Hard. Mistral AI emphasizes concise responses, vital for business applications. Mistral Large 2’s multilingual proficiency includes languages like Russian, Japanese, and Arabic, performing strongly on the multilingual MMLU benchmark. It also features enhanced function calling skills, making it suitable for complex business applications. Users can access Mistral Large 2 via la Plateforme under the name mistral-large-2407. Mistral AI is consolidating its offerings, including general-purpose models Mistral Nemo and Mistral Large, and specialist models Codestral and Embed. Fine-tuning capabilities are now extended to these models. The model is available through partnerships with Google Cloud Platform, Azure AI Studio, Amazon Bedrock, and IBM watsonx.ai. This expansion aims to bring Mistral AI’s advanced models to a global audience, enhancing accessibility and application development. Mistral Large 2 is the fourth model from the company in the past week, following the release of MathΣtral, a specialized 7B model designed for advanced mathematical reasoning and scientific exploration. The company also released Codestral Mamba 7B, based on the advanced Mamba 2 architecture, which is trained with a context length of 256k tokens and built for code generation tasks for developers worldwide. Additionally, Mistral AI introduced Mistral NeMo, a 12-billion parameter model with a 128k token context length, developed in partnership with NVIDIA.
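For readers who want to try the model, here is a minimal, hypothetical sketch of calling it through Mistral's hosted chat-completions API using the plain requests library; the endpoint URL, payload shape and response format follow the common chat-completions convention and are assumptions, so verify them against the official documentation before use.

```python
# Hypothetical sketch: calling mistral-large-2407 over the hosted REST API.
# Endpoint, payload and response shape are assumptions; check the official docs.
import os
import requests

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-2407",
        "messages": [
            {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```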
|
It also supports 80+ coding languages, such as Python, Java, and C++.
|
["AI News"]
|
["mistral ai"]
|
Siddharth Jindal
|
2024-07-24T21:25:50
|
2024
| 408
|
["Go", "mistral ai", "TPU", "AI", "GPT-4o", "ML", "R", "Python", "Aim", "Mistral Large 2", "Azure"]
|
["AI", "ML", "GPT-4o", "Mistral Large 2", "Aim", "Azure", "TPU", "Python", "R", "Go"]
|
https://analyticsindiamag.com/ai-news-updates/mistral-ai-unveils-mistral-large-2-beats-llama-3-1-on-code-and-math/
| 4
| 10
| 2
| true
| false
| false
|
10,047,435
|
How Google’s Switch Transformer Started An Ethical Debate
|
OpenAI’s GPT-3 has more or less taken over the tech world when it comes to language models, but earlier this year, Google introduced its NLP model Switch Transformer. Along with an improved parameter count, this model was accompanied by an ethics debate and job firings. In February, Google’s Ethical AI team leader Margaret Mitchell was fired from Google just a few months after her co-lead Timnit Gebru exited the company, supposedly over a paper about the real-world dangers of large language models that they co-wrote. Considering that this move came right around the corner from the release of the Switch Transformer, the situation gave rise to a significant ethics debate. Google’s ethics in the AI research unit has been under scrutiny since December’s dismissal of Gebru. Gebru’s exit prompted thousands of Google workers to protest. How large is too large? OpenAI’s GPT-3 is based on 175 billion computational parameters, AI21 Labs’ Jurassic-1 Jumbo on 178 billion parameters, and Google’s Switch Transformer on 1.6 trillion parameters. So, language models are getting bigger, but they don’t seem to be becoming more ethical proportionately. Large language models parrot human-like language based on their training on data available on the internet – leaving no doubt that their predictions will underpin Google searches, Wikipedia, or forum discussions. While there have been breakthroughs in text analysis, an increasing number of researchers, ethicists, and linguists are not satisfied with these tools, calling them overused and under-vetted. Moreover, the underlying biases and possible harms haven’t been fully reckoned with. For instance, when GPT-3 was asked to complete a sentence containing the word “Muslims,” in more than 60% of cases it introduced words like bomb, murder, assault, and terrorism. These models effectively mirror the internet, perpetuating stereotypes, hate speech, misinformation, and toxic narratives. Thousands of developers globally use GPT-3 to build their applications, generate emails and create AI-based text startups. With these tools reaching mass audiences (Gmail’s email autocomplete alone reaches two billion users), they are way too powerful to be left unchecked. Training Data The entirety of English-language Wikipedia makes up just 0.6% of GPT-3’s training data. Instead, large models ingest everything available on the internet. All of this data is created by humans and is accompanied by racism, sexism, homophobia, and other toxic content. Online forums include white supremacist threads and gendered binaries, inevitably making the language model associate positions of power with men over women. While some models are more biased than others, no model is free of it. The toxic and biased datasets are inevitably amplified by the models. Another important aspect is the distribution of information found on the internet. The data contributed online comes mainly from young users in developed countries. For instance, recent surveys of Wikipedians found that only 8.8 to 15% are women, while Pew Internet Research’s 2016 survey found that 67% of Reddit users in the US are men and 64% are between the ages of 18 and 29. Now, with models like GPT-2 that are sourced hugely from Reddit, this overrepresentation is bound to create huge biases and marginalisation. Given the size of the training data, the lack of prominence of underrepresented groups establishes a feedback loop that lessens the impact of data from underrepresented populations.
Aligning with Human Objectives Research by OpenAI and Stanford revealed that open-source projects are already attempting to recreate GPT-3, leaving a window of roughly six to nine months to set responsible NLP norms. “Participants discussed the need to align model objectives with human values better,” the paper stated. “Aligning human and model objectives was seen to be especially important for ’embodied’ AI agents which learn through active interaction with their environment.” Thus, the focus needs to turn to defining constraints and criteria that counter human bias, instead of merely pulling data that reflects the world as it is represented on the web.
|
Google’s ethics in the AI research unit has been under scrutiny since December’s dismissal of Gebru.
|
["Global Tech"]
|
["AI bias", "Generative Pre-Trained Transformer", "Margaret Mitchell"]
|
Avi Gopani
|
2021-09-01T14:00:00
|
2021
| 630
|
["Go", "startup", "OpenAI", "AI", "Transformers", "NLP", "GPT", "ViT", "Generative Pre-Trained Transformer", "Margaret Mitchell", "AI research", "R", "AI bias"]
|
["AI", "NLP", "OpenAI", "Transformers", "R", "Go", "GPT", "ViT", "startup", "AI research"]
|
https://analyticsindiamag.com/global-tech/how-googles-switch-transformer-started-an-ethical-debate/
| 3
| 10
| 1
| false
| false
| true
|
10,079,496
|
Estimating Cost to Set Up Semiconductor Fabrication in India?
|
Setting up a semiconductor fab is easier said than done. There are a lot of aspects to look into. This includes sourcing lithography tools from global manufacturers and having suitable infrastructure such as a vast area of land, copious amounts of water, a water recycling system, electricity lines and labour power, among other things. Analytics India Magazine spoke to Arun Mampazhy, a veteran semiconductor analyst, to gauge the cost estimate of setting up a fabrication plant. At the outset, Mampazhy addressed the government scheme surrounding semiconductor operations in India, saying, “The bigger picture, if you look at it [the policy], covers a variety of spectrum”. To begin with, a compound semiconductor or silicon photonics fabrication, for instance, is a smaller investment. A compound semiconductor fabrication will typically consist of three phases—growing wafers, making chips, and packaging—and would total up to $40 million. RuttonSha International Rectifier Ltd, a big player in the power semiconductor industry, is one of the applicants for a compound semiconductor plant. However, more than 80 percent of the world’s semiconductors are still made on silicon substrates. Hence, silicon fabrication becomes a crucial area to discuss with regard to the government’s semiconductor mission. The cost of setting up a silicon fabrication plant is hugely contingent on the technology node the buyers will aim for. This is because you need lithography equipment for manufacturing semiconductor chips, and the kind of tool used decides the range of technology nodes that you can work with. Currently, there are three applications in the race for silicon fabrication. They are targeting the following technology nodes: (1) IGSS Ventures, which is looking to set up a fabrication unit in Tamil Nadu, is aiming at the range of 28nm, 45nm, and 65nm technology. (2) ISMC has proposed a fabrication unit designing 65nm technology. (3) Lastly, the Foxconn-Vedanta joint venture will be looking at a 12-inch (300mm) wafer carrying 28nm technology. It is important to set the background straight on what the applicants are looking for, to determine the cost of the lithography tool that will be needed for the process. Mampazhy said, “The industry typically uses a 200mm wafer for more than 90nm process, whereas less than 90nm process generally requires a 300mm wafer.” Considering the requirements of the buyers, we are looking at process technology etched on a 300mm wafer. But, within the 300mm wafer size as well, there is a cost bifurcation. Up to 65nm or 55nm technology can be made with a 193nm argon fluoride (ArF) laser, which will cost about $40 million apiece. And if we go further down to the 45nm and 28nm range, all the way to 16nm, a 193nm immersion lithography tool will be required, which will cost about $100 million apiece. Mampazhy added that the current policy requires the manufacturers to build a capacity of 40,000 wafers per month. To get that kind of production running, he said, you need about 15 to 20 of these lithography tools. This means that the lithography tool you use makes a huge difference to the overall cost. According to Mampazhy, 65nm to 55nm technology is currently the sweet spot, since you could fully utilise the resources with the best technology available. Hence, 15-20 such 193nm ArF lasers would take the equipment cost tally up to $2 billion. One of the ways the cost could be controlled is by using refurbished equipment.
But, according to Mampazhy, “The government of India is not supportive of this, because they’re afraid that India will become a dumping ground of old equipment.” Besides, there could also be instances where the applicant might quote an overpriced number for the refurbished tool, and get the subsidy for the same. The $2-billion figure is, however, just the equipment cost. Adding to that, the infrastructural cost of setting up a fab will be about $200 million to $300 million. Plus, there will also be a technology transfer fee levied by the companies issuing the lithographic tools to the fabrication plants. The 65nm-55nm range will have a licensing fee of about $300 million to $400 million, and as we go to the narrower ranges, the transfer cost increases to about $1 billion for the 28nm range, and more if we go further down. Together, the three components, Mampazhy said, will add up to about $3 billion for the 65nm range. Likewise, we can estimate the total cost at the lower end of the spectrum. Govt subsidy to the rescue In September 2022, the Ministry of Electronics & Information Technology (MeitY) modified the existing semiconductor policy. The new programme had additional incentives to attract investments from companies/consortia to create a semiconductor and display fabrication ecosystem in India. These incentives include an outlay of ₹76,000 crore ($10 billion), where the government will provide eligible applicants financial support of 50% of the project cost on a pari-passu basis. The incentive package will also apply to setting up compound semiconductor/silicon photonics/sensors (including MEMS) fabs/discrete semiconductor fabs and semiconductor ATMP/OSAT units. The government intends to establish at least 20 such units under this scheme. The announcement of the scheme has since sparked the interest of quite a few applicants who wish to avail of the scheme. Of late, Reliance and HCL bid to acquire a 26-51% stake in ISMC Analog, following its proposal to establish a semiconductor wafer fabrication facility in Mysore. Beyond the central government subsidy, there are also several state-sponsored schemes pushing India’s semiconductor mission ahead. The Gujarat government, for example, has proposed additional assistance on the total capital expenditure of selected proposals. The incentives will leave investors with a much smaller outlay than they would otherwise have had to put in, in proportion to the stake they will hold in the production. What next? India has not been able to attract much foreign interest until now—the reason being it is yet to prove itself as a fertile ground for semiconductor manufacturing. But, once the machine gets running, we can expect India to host semiconductor plants for a wide range of technology nodes catering to a large range of industries and consumer electronics. Therefore, the modified scheme deserves attention, particularly because the previous edition of the scheme had an incremental approach to allocating cost for different ranges depending on the technology node. The 28nm or lower range was eligible for 50 percent of the project cost, 28nm to 45nm for up to 40 percent, and above 45nm up to 65nm was eligible for up to 30 percent of the project cost. Thus, the new model has levelled the playing field for everyone. Speaking about the road ahead for India in its semiconductor ambitions, Mampazhy stressed that India can have lofty ambitions for the future, but at present, taking the first step is important.
He added, “Be it 65nm or 28nm, for the next 2 to 3 years you build a path…that market will be there for these chips for next at least 10 to 15 years and by that time you can decide what is the next way whether you want to go for advanced nodes that have silicon itself or perhaps in a different direction.”
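To make the arithmetic above easier to follow, here is a small, illustrative Python tally of the figures quoted in this article for a 65nm-class fab and the effect of the 50% support under the modified scheme; all numbers are the article's estimates (the text rounds the gross figure to roughly $3 billion), not official costs.

```python
# Illustrative tally of the cost estimates quoted in this article for a
# 65nm-class silicon fab; these are the article's figures, not official costs.
litho_tool_cost = 40e6            # 193nm ArF scanner, ~$40M apiece
tools_needed = 20                 # ~15-20 tools for a 40,000 wafers/month capacity
equipment_total = 2.0e9           # overall equipment tally quoted above
infrastructure_high = 300e6       # $200M-$300M for the fab shell and utilities
tech_transfer_high = 400e6        # $300M-$400M licensing fee at the 65nm-55nm range

gross = equipment_total + infrastructure_high + tech_transfer_high
net_after_subsidy = gross * 0.5   # 50% central support on a pari-passu basis

print(f"Litho tools alone: ${tools_needed * litho_tool_cost / 1e9:.1f}B")
print(f"Gross project cost: ~${gross / 1e9:.1f}B (rounded to ~$3B in the text)")
print(f"Net of 50% subsidy: ~${net_after_subsidy / 1e9:.2f}B")
```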
|
Indian government has gone all guns blazing to create a semiconductor ecosystem in India, but how much does it really take to make semiconductor chips?
|
["AI Features"]
|
["semiconductor chips", "Semiconductor India", "semiconductor manufacturing", "Semiconductor Wafer Fabrication"]
|
Ayush Jain
|
2022-11-12T10:00:00
|
2022
| 1,171
|
["Go", "API", "programming_languages:R", "AI", "data_tools:Spark", "programming_languages:Go", "semiconductor manufacturing", "semiconductor chips", "Aim", "analytics", "Semiconductor India", "Semiconductor Wafer Fabrication", "R"]
|
["AI", "analytics", "Aim", "R", "Go", "API", "programming_languages:R", "programming_languages:Go", "data_tools:Spark"]
|
https://analyticsindiamag.com/ai-features/how-much-does-it-cost-to-set-up-semiconductor-fab-in-india/
| 3
| 9
| 3
| false
| true
| false
|
10,061,597
|
A guide to interpretable association rule mining using PyCaret
|
Association rule mining is one of the major concepts in the field of data science that helps mainly in making marketing-related decisions and requires transactional data. Making this procedure interpretable and explainable plays an important role in decision making. In this article, we will discuss association rule mining and we will do a hands-on implementation of this technique using the PyCaret library. Using PyCaret for this task makes it more interpretable and explainable. The major points to be discussed in the article are listed below. Table of contents: What is PyCaret?; What is association rule mining?; Module for association rule mining; Dataset for association rule mining; Data conversion; Modelling association rules; Visualizing association rule mining. What is PyCaret? PyCaret is one of the open-source libraries that provide machine learning solutions with the aim of low-code modelling and hypothesis testing. This library can be utilized in a variety of end-to-end machine learning experiments. Its low-code feature makes the modelling procedure very efficient and less time-consuming. Also, one noticeable thing about the library is that the modules designed under it are faster than manually built models. Along with all these features, this library also provides several interactive visualizations of models and data that can be used to make the machine learning procedure highly interpretable and explainable. In this article, we will discuss how we can perform association rule mining using the PyCaret library. We can install this library in the Google Colab environment using the following line of code: !pip install pycaret What is association rule mining? Association rule mining is a rule-generating machine learning method where rules tell us about the strength of the relationship between variables in a large dataset. We mainly find usage of association rules in market basket analysis, where a strong positive relation between two products lets the seller sell them together and earn more profit. Even the name of this machine learning method explains what we are trying to do: we are finding association rules between variables from a large dataset. This method is mostly intended to find strong rules from a large dataset or database by defining and using some measure of interestingness. For example, if {corn, cheese} → {pizza base} is found in the rules that we are mining, it indicates that customers buying corn and cheese together are more likely to also buy a pizza base. Association rule mining helps in making decisions about marketing activities such as pricing or product placement. In this article, we are going to use the PyCaret library for association rule mining, which has a special module for the procedure. Let’s take a look at the module. Module for association rule mining PyCaret has a pycaret.arules module for association rule mining, an unsupervised machine learning method. This module can be utilized for finding relationship measures between the variables of the dataset. One of the interesting things about the module is that it automatically converts datasets with transaction values into the shape that is required for market basket analysis. Since PyCaret is specially designed for low-code machine learning, this module also requires very little code to design a good model. Dataset for association rule mining We mostly find association rule mining used in market basket analysis, so in this article we will use samples from the Online Retail dataset.
This dataset contains details of transactions that occurred between 01/12/2010 and 09/12/2011 in an online retail store. The dataset contains the following variables: InvoiceNo, StockCode, Description, Quantity, InvoiceDate, UnitPrice, CustomerID, and Country. We can find the original dataset here. We will be using the dataset that PyCaret provides for practice; we can import it using the following lines of code. from pycaret.datasets import get_data data = get_data('france') Output: In this implementation, we are using the data for France only. In the output, we can see some of the values from the dataset. Now we are ready to implement our association rule mining project. Data conversion After loading the data, we are required to import the association rule module and convert the data from transactional form into market basket shape. We can do this using the following lines of code. from pycaret.arules import * exp_arul101 = setup(data = data, transaction_id = 'InvoiceNo', item_id = 'Description') Output: Here in the output, we can see the number of unique transactions in our dataset (the unique count of invoice numbers) and the number of unique items derived from the Description column. Since we haven’t ignored any items, the ignored-items field shows no values. Modelling association rules We can simply instantiate a model using the following line of code. model1 = create_model() If we want parameters of our own choice, we can define the following parameters in the model: metric, threshold, min_support, and round. Let’s print the shape and head of the created rules. Here in the output we can see the antecedents and consequents with their support, confidence, lift, leverage, and conviction values. In the above step, we simply created a model. While converting the dataset, we saw an option to ignore items in the output; in the setup function we can define the ignore_items parameter to ignore any item from the list. We can perform this using the following lines of code. exp_arul101 = setup(data = data, transaction_id = 'InvoiceNo', item_id = 'Description', ignore_items = ['POSTAGE']) Output: Here we can see that we have ignored the item POSTAGE. Let’s model this converted data to find the association rules. Let’s create and print details of our model while ignoring an item. model2 = create_model() print(model2.shape) model2.head() Output: Here we can see the difference between this output and the above output. Visualizing association rule mining The PyCaret library is famous for one more thing: the interpretability and explainability of its models. That means we can visualize our models and their results and understand them better. Let’s visualize our model. Before visualizing models in Google Colab, we are required to enable Colab for PyCaret. This can be done using the following lines of code. from pycaret.utils import enable_colab enable_colab() Output: Let’s plot the model. plot_model(model2) Output: The visualization we get is built on Plotly, which means it is interactive. We are not able to embed interactive visualizations here; in practice, you can interact with them. We can also plot this visualization in three dimensions. plot_model(model2, plot = '3d') Output: The above output is also interactive and three-dimensional. You can find these visualizations in this notebook. Final words In this article, we have gone through the process that can be followed for implementing solutions based on association rule mining using the PyCaret library.
We found that, using this Python library, we can perform this otherwise difficult task very efficiently and easily. References: PyCaret documentation; link for the codes.
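As a recap, here is a consolidated sketch of the workflow walked through above, written against the PyCaret 2.x arules module; the metric, threshold and min_support values shown are illustrative choices, and defaults may differ across PyCaret versions.

```python
# Consolidated sketch of the association rule mining workflow described above.
# Written against PyCaret 2.x; parameter values are illustrative, not prescriptive.
from pycaret.datasets import get_data
from pycaret.arules import setup, create_model, plot_model

data = get_data('france')

# Convert transactional data into market-basket shape, ignoring the POSTAGE item
exp = setup(data=data,
            transaction_id='InvoiceNo',
            item_id='Description',
            ignore_items=['POSTAGE'])

# Mine the rules and inspect support, confidence, lift, leverage and conviction
rules = create_model(metric='confidence', threshold=0.5, min_support=0.05)
print(rules.shape)
print(rules.head())

# Interactive Plotly visualisations in 2D and 3D
plot_model(rules)
plot_model(rules, plot='3d')
```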
|
Making association rule mining interpretable and explainable plays an important role in decision making. In this article, we will discuss association rule mining and we will do a hands-on implementation of this technique using the PyCaret library.
|
["Deep Tech"]
|
["AI (Artificial Intelligence)", "Association Rule Learning", "Data Science", "Deep Learning", "Machine Learning", "PyCaret", "Python"]
|
Yugesh Verma
|
2022-02-27T16:00:00
|
2022
| 1,128
|
["data science", "machine learning", "Plotly", "TPU", "AI", "Machine Learning", "PyCaret", "RAG", "Python", "Association Rule Learning", "Aim", "Colab", "Deep Learning", "Data Science", "R", "AI (Artificial Intelligence)"]
|
["AI", "machine learning", "data science", "Aim", "Colab", "Plotly", "RAG", "TPU", "Python", "R"]
|
https://analyticsindiamag.com/deep-tech/a-guide-to-interpretable-association-rule-mining-using-pycaret/
| 3
| 10
| 1
| true
| true
| true
|
10,045,254
|
Move Over CUDA: OpenAI Releases Triton For GPU Developers
|
“GPUs can be incredibly challenging to optimise for locality and parallelism, especially for computations that cannot be efficiently implemented using a combination of pre-existing optimized primitives.” – OpenAI At bare minimum, a deep neural network is a bunch of mathematical operations (like addition and multiplication) performed thousands of times every millisecond to find patterns in the input data. The power of deep neural networks (DNNs) comes from their hierarchical structure and the sequential nature of parametric (eg: convolutional) and non-parametric layers. The highly parallelisable nature of these models was exploited by graphics processing units (GPUs), which were originally designed for PC games to run realistic physics simulations (think: the movement of leaves in the wind). NVIDIA’s first GPU was launched in 1999. Gradually, researchers started to realise the superior floating-point performance of these GPUs for general-purpose computing and started to apply it aggressively. In 2003, a team of researchers unveiled Brook, the first widely adopted programming model to extend C with data-parallel constructs. Later, NVIDIA launched CUDA in 2006, the world’s first solution for general computing on GPUs. GPUs quickly became popular with DNNs setting new benchmarks almost every year, especially after the 2012 ImageNet explosion. GPUs owe their popularity to frameworks for general-purpose GPU computing, such as CUDA and OpenCL. Such frameworks have made the development of high-performance programs easier. NVIDIA’s CUDA is a parallel computing platform and programming model for general computing on GPUs. With CUDA, developers were able to dramatically expedite computing applications. GPUs are well suited for DNNs due to the distribution of workloads. While the sequential part of the workload runs on the CPU (which is optimised for single-threaded performance), the compute-intensive portion of the application runs on thousands of GPU cores in parallel. CUDA developers code in popular languages such as C, C++, Fortran, Python and MATLAB and express parallelism through extensions. (Source: OpenAI) Since its inception, the CUDA ecosystem has grown rapidly to include software development tools, services and partner-based solutions. However, writing GPU kernels is challenging. According to the team at OpenAI, it is difficult to optimise GPUs for locality and parallelism. GPU architectures are also rapidly evolving and specialising (eg: tensor cores in NVIDIA and AMD’s microarchitectures). To address the complexity in GPU programming, OpenAI has open-sourced a Python-like programming language called Triton. Triton is a language and compiler for parallel programming. It provides a Python-based programming environment for productively writing custom DNN compute kernels capable of running at maximal throughput on modern GPU hardware. Why Triton (Source: Paper by Tillet et al.) CUDA implementations of this parallelization strategy can be challenging to write. Popular libraries such as cuBLAS and cuDNN only support a restricted set of tensor operations, leaving the implementation of novel primitives to experts. The practical difficulty of GPU programming and the rise in demand for DNN-based applications geared developers towards Domain-Specific Languages (DSLs) and compilers. However, DSLs, based on polyhedral machinery or scheduling languages, remain less flexible and, according to OpenAI, are slower than the best handwritten compute kernels available in libraries like cuBLAS, cuDNN or TensorRT.
According to the original authors of Triton, these systems generally perform well for certain classes of problems such as depthwise-separable convolutions; are often much slower than vendor libraries in practice; and lack the expressivity necessary to implement the structured sparsity patterns required for linear speedup and efficient usage of GPUs. CUDA vs Triton (Image credits) The advantages of Triton come at the expense of increased programming effort. According to the researchers, Triton relies on the addition of tile-level operations and optimisations into traditional compilation pipelines and provides more flexibility with automatic inference and other features. The purpose of Triton is to provide a stable frontend for DNN transcompilers, as well as for programmers familiar with low-level GPU programming. It has CUDA-like syntax, NumPy-like semantics and functions on a “Single-Program, Multiple-Data” (SPMD) programming model. The execution of CUDA code on GPUs is supported by an SPMD programming model, where each kernel is associated with an identifiable thread-block. The Triton programming model is similar, but each kernel is single-threaded, automatically parallelised and associated with a set of global ranges that varies from instance to instance. This approach leads to simpler kernels in which CUDA-like concurrency primitives are non-existent. It offers programmers more flexibility than current DSLs while allowing compilers to aggressively optimise programs for data locality and parallelism. For instance, when used for element-wise operations in neural networks, Triton achieves peak performance with just ~25 lines of Python code compared to CUDA. Try Triton here. You can install the latest stable release of Triton from pip: pip install triton Key takeaways: The superior performance of Triton comes from a modular system architecture. Triton makes non-trivial modifications of matrix multiplication kernels accessible to developers with minimal expertise. Triton simplifies the development of specialised kernels. Triton programs can be efficiently and automatically parallelised.
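To give a feel for the single-threaded, block-based SPMD model described above, here is a minimal vector-addition kernel written in Triton's Python DSL, in the style of the official tutorials; the block size and grid choice are illustrative rather than tuned, and a CUDA-capable GPU is required.

```python
# Minimal Triton vector-addition kernel (requires a CUDA-capable GPU);
# block size and grid are illustrative, not tuned for any particular card.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # each program instance handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.rand(10_000, device="cuda")
y = torch.rand(10_000, device="cuda")
print(torch.allclose(add(x, y), x + y))  # True
```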
|
OpenAI’s Triton can hit peak hardware performance with relatively little effort
|
["Deep Tech"]
|
[]
|
Ram Sagar
|
2021-08-04T17:00:00
|
2021
| 803
|
["CUDA", "NumPy", "OpenAI", "AI", "neural network", "R", "Python", "C++", "GPU computing"]
|
["AI", "neural network", "OpenAI", "NumPy", "GPU computing", "CUDA", "Python", "R", "C++", "CUDA"]
|
https://analyticsindiamag.com/deep-tech/openai-releases-triton-gpu-developers-cuda/
| 3
| 10
| 0
| true
| false
| true
|
69,401
|
Weekend Hackathon #11: Who Wins the Classic Computer Vision Problem
|
MachineHack is back again with another exciting hackathon for this weekend, and this time we take the data science enthusiasts to the past with the classic computer vision problem Dogs vs Cats. Click here to participate Problem Statement & Description In this hackathon, you will be provided with images of cats and dogs, and you must use your Computer Vision skills to build an image classifier to classify an image as that of a dog or of a cat. In this supervised image classification task, your goal is to classify the images into their respective classes using accuracy as a metric. Dogs vs Cats is a classic dataset and has been used to train and evaluate models for binary classification tasks. With today’s state-of-the-art Computer Vision models, we expect all the participants to achieve an accuracy of more than 90%. Data Description: Train (folder): contains 9,471 images of cats and dogs. Test (folder): contains 4,059 images of cats and dogs. Sample_Submission.csv: the format of submission accepted. Train.csv: contains the file name and appropriate category for each image in the train data. Test.csv: contains the file name for each image in the test data. To start quickly, we are providing this tutorial that has a few simple yet effective methods, which you can use to build a powerful image classifier using only a few training examples — just a few hundred or thousand pictures from each class you want to categorize: Building powerful image classification models using very little data. The datasets will be made available for download on July 10th, Friday, at 6 pm IST. This hackathon and the bounty will expire on July 13th, Monday, at 7 am IST. Sample Submission Format: Click here to participate Bounties The top 3 competitors will receive a free pass to the Computer Vision DevCon 2020. Rules Generic Rules One account per participant/team. Submissions from multiple accounts will lead to disqualification. The submission limit for the hackathon is 10 per day, after which submissions will not be accepted. All registered participants are eligible to compete in the hackathon. We ask that you respect the spirit of the competition and do not cheat. Use of any external dataset is prohibited, and doing so will lead to disqualification. Hackathon Specific Rules Participants must not manually label the images in their submission. We work hard to host fair and fun contests and expect the same in return from the participants. However, we hold the right to wield the following measures: spot-checking your code at any point in the competition; disqualifying a participant on failure to provide proof of algorithms within a reasonable time frame; releasing new test data at the end of the competition; and accessing your source code at the end of the competition to verify that the solution does not utilise any unfair means. This hackathon will expire on 13th July, Monday, at 7 am IST. Click here to participate Evaluation The submission will be evaluated using the accuracy metric. One can use Sklearn’s accuracy_score to get a valid score.
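Since submissions are judged on accuracy, here is a tiny illustration of the scoring call mentioned above; the cat/dog labels are made up purely for demonstration.

```python
# Minimal illustration of the accuracy metric; labels are hypothetical.
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]  # 0 = cat, 1 = dog (hypothetical ground truth)
y_pred = [0, 1, 0, 0, 1]  # hypothetical model predictions
print(accuracy_score(y_true, y_pred))  # 0.8
```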
|
MachineHack is back again with another exciting hackathon for this weekend, and this time we take the data science enthusiasts to the past with the classic computer vision problem Dogs vs Cats. Problem Statement & Description In this hackathon, you will be provided with images of cats and dogs, and you must use your Computer […]
|
["Deep Tech"]
|
["Computer Vision", "Hackathons", "Machinehack", "Machinehack Hackathon"]
|
Amal Nair
|
2020-07-10T17:00:00
|
2020
| 498
|
["data science", "Go", "programming_languages:R", "AI", "Machinehack", "Hackathons", "programming_languages:Go", "computer vision", "Machinehack Hackathon", "Computer Vision", "R", "ai_applications:computer vision"]
|
["AI", "computer vision", "data science", "R", "Go", "programming_languages:R", "programming_languages:Go", "ai_applications:computer vision"]
|
https://analyticsindiamag.com/deep-tech/weekend-hackathon-11-who-wins-the-classic-computer-vision-problem/
| 3
| 8
| 0
| true
| true
| false
|
10,041,194
|
Smart Healthcare: Light At The End Of The Tunnel For India
|
“Digitising healthcare is the key enabler for expanding precision medicine, transforming care delivery, and improving patient experience.”Dileep Mangsuli, Siemens Healthineers According to the Government of India, the doctor-patient ratio is 1:1445, which is lower than what is prescribed by the WHO. India accounts for more than 1.3 billion population with only 2.4% of the world geographical area, the concept of ‘social distancing’ goes for a toss. Rising cases, alarming positivity rate and extended lockdowns have made many health services inaccessible. Digitisation of healthcare has been sort of a reprieve in these troubled times. Announcing a complete digital health ecosystem, PM introduced the National Digital Health Mission comprising health ID, personal health records, Digi Doctor, and health facility registry as key features. Additionally, in contrast to the previous fiscal year, the Finance Minister has raised healthcare spending by nearly 137 per cent in the Union Budget 2021-22 – signalling the shift. “The future of healthcare will be digital. Digitising healthcare is the key enabler for expanding precision medicine, transforming care delivery, and improving patient experience, allowing healthcare providers to increase value through better outcomes,” said Dileep Mangsuli of Siemens Healthineers talking to Analytics India Magazine. People are much more familiar with technology than ever before. Pandemic has changed the traditional lifestyle, and the pattern seems set for the future, with people preferring to receive healthcare services at their doorsteps. “Virtual access, telemedicine, and secure knowledge exchange will strengthen bonds between patients and care teams. Globally, the pandemic has driven the adoption of telemedicine, which is the case in India. A cultural shift to a digital mindset, coupled with new technology, will enable developing learning health systems that improve continually,” explained Dileep. According to Ranganath Jagannath, Director of Growth, Agora, India & SAARC, bringing doctors and patients together in real-time is a big game-changer in healthcare. Using real-time engagement (RTE), medical consultation apps that allow patients and doctors to engage via text, voice and video chat are significantly easing the load on existing infrastructure. “Low-latency, high-quality experience of RTE allows caregivers and patients to not only conduct secure, remote voice and video consultations but also facilitates the transfer of medical data and records,” he added. As the pandemic sets in, the demand for fitness and online wellness sessions to stay fit witnesses tremendous growth. “We expect virtual sessions for mental wellbeing and fitness to be redefined considerably now as well as post-pandemic. For example, MixPose, a streaming platform for yoga instructors and fitness professionals, uses real-time engagement to build interactive live streaming into its platform, allowing instructors and viewers to capture the experience of a guided session,” said Ranganath. Nupur Khandelwal, Co-Founder, Navia Life Care believes that the Covid-19 pandemic has turned our health care system upside down and challenged consumers’ sense of wellbeing. One of the most significant impacts that the industry witnessed is the shift from hospital-based services to telemedicine and virtual care. Consumers are now using virtual visits more than ever before. “The crisis has accelerated the already planned shift in the healthcare continuum with the adoption of virtual and AI-driven tools. 
Post-crisis, AI and IoT-embedded solutions will rule the healthcare industry. Smart healthcare will change our approach from patient-centric prevention to person-centric prevention,” she said. The requirement for efficiency and touch-free interactions, with data transmitted seamlessly into a patient’s electronic health record, has the potential to expand the clinical use of natural language processing, a branch of AI that allows computers to interpret spoken statements. Strong data analytics, record-keeping and real-time tracking platforms are key for health organisations to adopt. However, the lack of proper data governance policies still continues to create roadblocks for the widespread adoption of cutting-edge services. The combined expenditure by the Centre and States on health accounts for about 1.5 per cent of India’s GDP, or Rs 3 per person per day. It falls well short of the 2.5 per cent goal set by the National Health Policy of 2017. The health infrastructure has crumbled, medical staff are in short supply, and spending on R&D is a mere 0.65% of GDP. The pandemic has proved that it is important to rethink the country’s healthcare strategies, and AI-empowered digitisation can be the torchbearer.
|
“Digitising healthcare is the key enabler for expanding precision medicine, transforming care delivery, and improving patient experience.” Dileep Mangsuli, Siemens Healthineers According to the Government of India, the doctor-patient ratio is 1:1445, which is lower than what is prescribed by the WHO. India accounts for more than 1.3 billion population with only 2.4% of the […]
|
["AI Trends"]
|
[]
|
kumar Gandharv
|
2021-06-03T14:00:00
|
2021
| 710
|
["Go", "programming_languages:R", "AI", "ML", "Git", "ViT", "analytics", "data governance", "GAN", "R"]
|
["AI", "ML", "analytics", "R", "Go", "Git", "data governance", "GAN", "ViT", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-trends/smart-healthcare-light-at-the-end-of-the-tunnel-for-india/
| 3
| 10
| 1
| false
| true
| false
|
40,558
|
5 Challenges In A Data-Driven Project And How To Overcome Them
|
The term data-driven means building tools and abilities in order to act on data. As a foundation of a data-driven project, a broad and deep understanding of the content, structure, quality issues and necessary transformations is required, along with appropriate tools and technological resources. Tools and technologies keep evolving constantly. In the current scenario, one of the crucial tasks in an organisation while managing data is the analysis of enormous amounts of data. Working with data is not an easy task and can be time-consuming. In this article, we list down 5 challenges which can be seen in a data-driven project and measures to avoid them. 1| Data Quality The process of discovering data is a crucial and fundamental task in a data-driven project. Approaches to data quality can be derived from specific requirements, such as user-centred and other organisational frameworks. How To Avoid: Methods such as data profiling and data exploration will help analysts investigate the quality of datasets as well as the implications of their use. The data quality cycle must be followed in order to establish best practices for improving and ensuring high data quality. 2| Data Integration In general, the method of combining data from various sources and storing it together to get a unified view is known as data integration. An organisation with inconsistent data is likely to have data integration issues. How To Avoid: There are several data integration platforms, such as Talend, Adeptia, Actian and QlikView, which can be used to solve complex data integration issues. These tools provide features such as automating and orchestrating transformations, building extensible frameworks and automating query performance optimisation. 3| Dirty Data Data which contains inaccurate information can be described as dirty data. Removing all dirty data from a dataset is virtually impossible, so strategies for working with dirty data need to be implemented depending on the severity of the errors. There are basically six types of dirty data, mentioned below. Inaccurate Data: In this case, the data can be technically correct but inaccurate for the organisation. Incorrect Data: Incorrect data occurs when field values are created outside of the valid range of values. Duplicate Data: Duplicate data may occur due to reasons such as repeated submissions, improper data joining, etc. Inconsistent Data: Data redundancy is one of the main causes of inconsistent data. Incomplete Data: This is due to data with missing values. Business Rule Violation: This type of data violates the business rules in an organisation. How To Avoid: This challenge can be overcome when organisations hire data management experts to cleanse, validate, replace or delete the raw and unstructured data. There are also data cleansing or data scrubbing tools, such as TIBCO Clarity, available in the market to clean dirty data. 4| Data Uncertainty Reasons for data uncertainty can range from measurement errors to processing errors. Known and unknown errors, as well as uncertainties, should be expected when using real-world data. There are five common types of uncertainty, mentioned below. Measurement Precision: Approximation leads to uncertainty. Predictions: These can be projections of future events, which may or may not happen. Inconsistency: Inconsistency between experts in a field or across datasets is an indication of uncertainty. 
Incompleteness: Incompleteness in datasets, including missing data or data known to be erroneous, also causes uncertainty. Credibility: Credibility of the data or of its source is another type of uncertainty. How To Avoid: There are powerful uncertainty quantification and analytics software tools, such as SmartUQ and UQLab, which are used to reduce the time, expense and uncertainty associated with simulating, testing and analysing complex systems. 5| Data Transformation Raw data from various sources most often do not work well together and thus need to be cleaned and normalised. Data transformation is the method of converting data from one format to another in order to gain meaningful insights from it. Data transformation is also known as ETL (Extract, Transform, Load), which helps convert a raw data source into a validated and clean form for gaining useful insights. Although the whole of the data can be transformed into a usable form, some things can still go wrong with an ETL project, such as an increase in data velocity or the time cost of fixing broken data connections. How To Avoid: There are various ETL tools, such as KETL and Jedox, which can be used to extract data and store it in the proper format for the purpose of analysis. Bottom Line: With emerging technologies, data-driven projects have become fundamental to the success of an organisation. Data is a valuable asset in an organisation and comes in various sizes. The road to a successful data-driven project is to overcome these challenges as much as possible. There are numerous tools available in the market to extract valuable patterns from unstructured data.
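To make the data profiling and cleansing steps discussed above concrete, here is a minimal sketch in pandas; the dataset, column names and the age threshold are hypothetical and purely illustrative, not a prescription from any of the tools named in the article.

```python
# Illustrative data profiling and cleansing with pandas (hypothetical data).
import pandas as pd

# Hypothetical raw extract with duplicate keys, missing and out-of-range values.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "age": [34, None, None, 29, 415],
    "country": ["IN", "IN", "IN", "US", "US"],
})

# Profiling: missing values, duplicate keys and summary statistics.
print(df.isna().sum())
print(df.duplicated(subset="customer_id").sum())
print(df.describe())

# Cleansing: drop duplicate keys, keep plausible ages, impute missing ages.
clean = df.drop_duplicates(subset="customer_id")
clean = clean[clean["age"].between(0, 120) | clean["age"].isna()].copy()
clean["age"] = clean["age"].fillna(clean["age"].median())
print(clean)
```

Dedicated platforms automate and scale these checks, but the underlying profile-then-cleanse loop is the same.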
|
The term data-driven means to build tools and abilities in order to act on data. As a foundation of the data-driven project, a broad and deep understanding of the content, structure, quality issues, as well as necessary transformations along with appropriate tools and technological resources are required. Tools and technologies keep evolving constantly. At the […]
|
[]
|
["data cleansing"]
|
Ambika Choudhury
|
2019-06-11T08:13:34
|
2019
| 842
|
["Go", "programming_languages:R", "AI", "ETL", "data-driven", "programming_languages:Go", "data cleansing", "analytics", "data quality", "GAN", "R"]
|
["AI", "analytics", "R", "Go", "ETL", "data quality", "GAN", "data-driven", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/5-challenges-in-a-data-driven-project-and-how-to-overcome-them/
| 2
| 10
| 2
| false
| true
| false
|
13,550
|
Hotstar and Zapr tie-up to tap on the mobile audience analytics
|
Stressing deep user engagement and audience segmentation on mobile marketing platforms, Hotstar, one of the leading digital streaming platforms in India, has partnered with Zapr Media Labs, a Bangalore-based media tech company. With this partnership, Hotstar, a Star India Ltd initiative, intends to drive the next wave of mobile audience analytics in India. The strategic partnership between the two brands would help create a better understanding of the mobile audience, which brands can then use for personalised communication with audiences through ads and offers. Advertisers may soon have access to the first set of consumer analytics data, as the companies would be sharing it in the coming weeks. Why the need for analytics in mobile streaming platforms? Though the internet has been dominating mobile screens for a while, mobile marketing options haven’t been tapped fully to date. This could be attributed to a lack of platforms that could track user engagement and segmentation. This means that though there has been a significant amount of display advertising, marketers have failed to build significant brand value due to the lack of audience analytics. This partnership can address these shortfalls and set out a roadmap for advertisers to get the most out of digital platforms. With more than 200 million downloads to date, Hotstar believes that with this partnership, marketers will be able to leverage deep audience analytics and build brands better on mobile platforms. “Hotstar, through this partnership, intends to evolve into a full-fledged technology and analytics company that will shape the next wave of mobile usage and advertising in India. Hotstar and Zapr will together create a deeper understanding of mobile audiences that can be leveraged by brands to drive impactful audience targeting as well as create personalized communication and offers”, Zapr wrote in its blog. Why Zapr Media? It is worth noting that Zapr Media has rapidly grown into one of India’s largest media consumption repositories, with data on millions of users, significantly more than others in the industry. It has been analysing television viewership across various channels in India, providing targeted digital analytics and insight into offline consumer behaviour. The company also boasts an analytics platform, built in-house, that lets marketers better understand audiences and target them more precisely, thanks to the huge repository of data that it analyses. The partnership is accompanied by a minority investment in Zapr Media by Hotstar, the amount and terms of which haven’t been disclosed yet.
|
Stressing on deep user engagement and segmentation of the audience on mobile marketing platforms, Hotstar, one of the leading digital streaming platforms in India has partnered with Zapr Media Labs, a Bangalore-based media tech company. With this partnership, Hotstar, a Star India Ltd initiative, intends to drive the next wave of mobile audience analytics in […]
|
["AI News"]
|
[]
|
Srishti Deoras
|
2017-03-16T11:52:08
|
2017
| 427
|
["API", "programming_languages:R", "AI", "Git", "RAG", "analytics", "R", "analytics platform"]
|
["AI", "analytics", "RAG", "R", "Git", "API", "analytics platform", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/hotstar-zapr-tie-tap-mobile-audience-analytics/
| 2
| 8
| 2
| false
| false
| false
|
61,617
|
Uber Open-Sources Fiber – A New Library For Distributed Machine Learning
|
Latest technologies such as machine learning and deep learning require a colossal amount of data to improve its outcomes’ accuracy. However, it is nearly impossible for a local computer to process the vast amount of data. As a result, practitioners use distributed computing for obtaining high-computational power to deliver quick and accurate results. However, effectively managing distributed computation is not straightforward, and this causes hindrance in training and evaluating AI models. To address these challenges, Uber has open-sourced its Fiber framework to help researchers and developers streamline their large-scale parallel scientific computation. Fiber Fiber is a Python-based distributed computing framework for modern computer clusters. With Fiber, users are not limited to programming only on desktop or laptop, but the whole computer cluster. Initially, Uber built Fiber to support complex projects like POET and similar projects that required distributed computing, but today, it has open-sourced the framework for the larger community. Key features of Fiber: Easy to use: Leveraging the framework, one can write programs that run on the computer cluster, without the requirement for deep-dive into details of the computer cluster.Easy to learn: If one is familiar with standard multiprocessing Python’s API, then they will not require any other expertise to work with Fiber.Fast performance: For a quick and reliable connection, Fiber’s communication backbone is built on Nanomsg, which is a high-performance asynchronous messaging library.No need for deployment: You can run Fiber applications the same way as running a typical software on a computer cluster, and Fiber handles the rest for you. Reliable computation: Fiber also has a built-in error handling function that helps users run a pool of processes. This allows them to focus on writing the actual application code, instead of dealing with crashed workers. Fiber can also work in tandem with specialized frameworks in areas where performance is critical. To accomplish this, you need to use Fiber’s Ring feature, which assists in setting up a distributed training job on computer clusters. Architecture Fiber is developed in a way that retains flexibility, such that it can support different backends working on various cluster management systems. For this, the Fiber is divided into several layers, such as API layer, backend layer and cluster layer. While the API layer acts as fundamental blocks for Fiber-like processes, queues, pools, and managers, the backend layer handles tasks like creating or terminating jobs on various cluster managers. The cluster layer consists of different cluster managers that assist in effectively managing resources and tracking different jobs. How Is It Different? Unlike other distributed machine learning tools, Fiber introduces a new concept called ‘job-backed processes’ or ‘Fiber process’. Although it is similar to Python’s multiprocessing library, Fiber comes with more flexibility – apart from running locally, it can also execute remotely on different machines. It is because every job-backed process is containerized and has its allocation of CPU, GPU, among other resources. Besides, codes are self-contained as all the child processes are started with the same container image as the parent process to guarantee a consistent running environment without relying on other activities. 
Furthermore, unlike Spark and IPyParallel, Fiber only needs to be installed on a single machine as a standard Python pip package, thereby simplifying the workflows and at the same time, giving more control. Other Components Queues and pipes: The library behaves like other multiprocessing APIs, but the difference is that they are now shared by multiple activities running on different machines. Therefore, two different procedures can read and write from the same pipe. Besides, each process can send to or receive from the same queue at the same time. Pools, Managers and Proxy Objects: Fiber has extended the pools to work with job-backed processes to manage thousands of remote work. Besides, Fiber through Managers and Proxy Objects provides built-in-memory storage for applications. This was earlier carried out with external storage like Cassandra, Redis, and more. Fiber Rings: It is a concept where all the processes work collectively as relative equals. Thus, unlike Pool, Ring does not have the idea of a master and worker processes. Fiber Ring also helps in setting up a topology, which is common in machine learning practices while carrying out distributed SGD. Usually, this is a challenging task, but Ring simplifies it as it does all the heavy lifting. Outlook Since large-scale solutions are highly reliant on clusters, Fiber can assist users in achieving many goals with heterogeneous computing hardware, while ensuring the resources are used effectively. Fiber works almost like other frameworks, but has unique advantages that can be a game-changer for developers to simplify their workloads while developing AI-based solutions. It promises to bridge the gap between making code work locally and running it on a production cluster. The ability to add new dependencies to the code without the need for re-deployment might make Fiber standout against popular tools like Spark. Also Read: Top 10 Books For Learning Apache Spark
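As a rough sketch of the multiprocessing-style Pool workflow described above, the snippet below assumes Fiber exposes a Pool that mirrors Python’s multiprocessing.Pool; the import path and signature are assumptions based on that description and should be verified against Uber’s Fiber documentation.

```python
# Hedged sketch: Fiber is described as mirroring Python's multiprocessing API,
# so a Pool-based job is assumed to look roughly like this. Verify the exact
# names and signatures against the official Fiber documentation.
from fiber import Pool  # assumed import path

def square(x):
    # Work that would run inside a containerised, job-backed process.
    return x * x

if __name__ == "__main__":
    pool = Pool(processes=4)              # workers may be scheduled on the cluster
    results = pool.map(square, range(16))
    pool.close()
    pool.join()
    print(results)
```

The appeal of this design is that the same script can run locally for debugging and then on a cluster backend without code changes, which is exactly the gap between laptop and production cluster that Fiber aims to close.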
|
Latest technologies such as machine learning and deep learning require a colossal amount of data to improve its outcomes’ accuracy. However, it is nearly impossible for a local computer to process the vast amount of data. As a result, practitioners use distributed computing for obtaining high-computational power to deliver quick and accurate results. However, effectively […]
|
[]
|
["Machine Learning"]
|
Rohit Yadav
|
2020-04-14T18:00:00
|
2020
| 814
|
["machine learning", "AI", "ML", "distributed computing", "Machine Learning", "Apache Spark", "RAG", "Python", "deep learning", "R", "Redis"]
|
["AI", "machine learning", "ML", "deep learning", "RAG", "distributed computing", "Apache Spark", "Redis", "Python", "R"]
|
https://analyticsindiamag.com/ai-features/uber-open-sources-fiber-a-new-library-for-distributed-machine-learning/
| 3
| 10
| 1
| true
| false
| true
|
10,097,632
|
Meta Reports its Most Profitable Quarter Since 2021, Stocks Surge 7%
|
Meta Platforms, Inc. reported strong financial results for the second quarter of 2023. The company’s revenue for the quarter was $31.999 billion, showing an 11% increase compared to the same period in 2022. Despite higher expenses, Meta’s income from operations rose by 12% year-over-year, reaching $9.392 billion. The net income for the quarter was reported to be $7.788 billion, a significant 16% increase from Q2 2022, and Diluted Earnings per Share (EPS) stood at $2.98, showing a 21% increase from the same period last year. “We had a good quarter. We continue to see strong engagement across our apps and we have the most exciting roadmap I’ve seen in a while with Llama 2, Threads, Reels, new AI products in the pipeline, and the launch of Quest 3 this fall,” Meta founder and CEO Mark Zuckerberg said during the earnings call. This marked Meta’s most profitable quarter since 2021, partially driven by significant revenue growth from their short-form video product, Reels, leading to a 7% surge in the company’s stock during after-market trading. The company experienced significant growth in user engagement, with Family Daily Active People (DAP) averaging 3.07 billion for June 2023. Family Monthly Active People (MAP) reached 3.88 billion as of June 30, 2023, indicating a 6% increase year-over-year. Facebook Daily Active Users (DAUs) were 2.06 billion on average for June 2023, representing a 5% increase from the same period in 2022. Facebook Monthly Active Users (MAUs) were 3.03 billion as of June 30, 2023, showing a 3% increase year-over-year. In terms of advertising, ad impressions delivered across their Family of Apps increased by 34% year-over-year in Q2 2023. However, the average price per ad decreased by 16% year-over-year. Meta’s financial position as of June 30, 2023, showed $53.45 billion in cash, cash equivalents, and marketable securities. On the other hand, the company’s long-term debt reached $18.38 billion as of the same date. The company implemented restructuring measures to enhance efficiency and align its business and strategic priorities, with most of the planned employee layoffs completed as of June 30, 2023. Further assessments were ongoing regarding facilities consolidation and data center restructuring initiatives. For the outlook of the third quarter of 2023, Meta expects total revenue to be in the range of $32-34.5 billion, with a foreign currency tailwind of approximately 3% to year-over-year total revenue growth. The company also anticipates total expenses for the full year 2023 to be in the range of $88-91 billion. However, Meta foresees higher operating losses in 2023 for Reality Labs due to investments in augmented reality/virtual reality and ecosystem scaling. The company acknowledges potential regulatory challenges in the EU and the US, which could significantly impact their business and financial results. Overall, Meta’s financial results for Q2 2023 showcase positive growth and performance across various metrics, alongside an optimistic outlook for the future. The company’s focus on the metaverse, AI, and ongoing product development efforts will be closely monitored by investors and analysts.
|
Meta Platforms, Inc. reported strong financial results for the second quarter of 2023. The company’s revenue for the quarter was $31.999 billion, showing an 11% increase compared to the same period in 2022. Despite higher expenses, Meta’s income from operations rose by 12% year-over-year, reaching $9.392 billion. The net income for the quarter was reported […]
|
["AI News"]
|
["Metaverse"]
|
Shyam Nandan Upadhyay
|
2023-07-27T10:21:36
|
2023
| 492
|
["Go", "programming_languages:R", "AI", "Metaverse", "programming_languages:Go", "RAG", "llm_models:Llama", "R"]
|
["AI", "RAG", "R", "Go", "llm_models:Llama", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/metas-stocks-surge-7-after-reporting-its-most-profitable-quarter-since-2021/
| 3
| 7
| 2
| false
| false
| false
|
10,040,207
|
Andrew Ng’s DeepLearning.AI launches New Course On MLOps
|
Recently, Andrew Ng took to the professional networking platform to announce that the Machine Learning Engineering for Production (MLOps) Specialization by DeepLearning.AI is now available on Coursera. The Specialization covers how to conceptualise, build and maintain integrated systems that continuously operate in production. Ng posted, “I’m thrilled DeepLearning.AI’s Machine Learning Engineering for Production (MLOps) specialization is now available on Coursera!” “Being able to train ML models is essential. And, to build an effective AI career, you need production engineering skills as well, to build and deploy ML systems. With this specialization, you can grow your knowledge of ML into production-ready skills,” the AI expert added. In striking contrast with standard machine learning modelling, production systems need to handle relentlessly evolving data. Moreover, the production system must run non-stop at the minimum cost while producing the maximum performance. In this Specialization, you will learn how to use well-established tools and methodologies for doing all of this effectively and efficiently. The instructors of this course are Robert Crowe, TensorFlow Developer Engineer at Google; Laurence Moroney, Lead AI Advocate at Google; and Andrew Ng. In this Specialization, you will become familiar with the capabilities, challenges and consequences of machine learning engineering in production. Some of the important topics you will learn include designing a machine learning production system end to end; establishing a model baseline, addressing concept drift and prototyping; and developing, deploying and continuously improving a productionised machine learning application. You will build data pipelines by collecting, cleaning and validating datasets, and learn how to establish a data lifecycle using data lineage and provenance metadata tools. Lastly, you will learn to apply best practices and progressive delivery techniques to maintain and monitor a continuous production system. By the end of this course, you will be ready to perform several interesting tasks, such as designing an end-to-end ML production system, building data pipelines, establishing a data lifecycle, applying techniques to manage modelling resources, using analytics to address the fairness of ML models, implementing feature engineering, and delivering deployment pipelines for model serving. The course will take approximately 3 months to complete, and you can earn a certificate upon completion. Enroll here.
|
Recently, Andrew Ng took to the professional networking platform to announce that the Specialization on Machine Learning Engineering for Production (MLOps) by DeepLearning.AI is now available on Coursera. The Machine Learning Engineering for Production (MLOps) Specialization covers how to conceptualize, build, and maintain integrated systems that continuously operate in production. NG posted, “I’m thrilled DeepLearning.AI’s […]
|
["AI News"]
|
["Andrew Ng", "andrew ng ai", "andrew ng artificial intelligence", "Andrew ng deeplearning.ai"]
|
Ambika Choudhury
|
2021-05-14T17:42:47
|
2021
| 382
|
["andrew ng ai", "Andrew ng deeplearning.ai", "andrew ng artificial intelligence", "Go", "machine learning", "data lineage", "AI", "ML", "MLOps", "data pipeline", "Andrew Ng", "analytics", "TensorFlow", "R"]
|
["AI", "machine learning", "ML", "analytics", "MLOps", "TensorFlow", "R", "Go", "data pipeline", "data lineage"]
|
https://analyticsindiamag.com/ai-news-updates/andrew-ngs-deeplearning-ai-launches-new-course-on-mlops/
| 2
| 10
| 1
| false
| true
| false
|
10,074,663
|
Digital Twins Is A Clever Way To Fight Urban Floods
|
The Silicon Valley of India has been underwater for the past few days. Excessive rains coupled with poor infrastructure has left many parts of the cities submerged. Urban floods are not uncommon in India. In the past few years, bigger cities like Chennai (2015) and Mumbai (2019) have faced the wrath of heavy downpours in a limited time period—eventually leading to floods. Interestingly, these are the cities where much of the modern smart infrastructure can be found. The case of Bengaluru city is even more interesting as it is widely hailed as the technology capital of the country—with a keen integration of tech in several aspects of everyday city life. With the tech capital crumbling in the wake of incessant rains, the question naturally arises if there is a technological way out of this mess. ‘Digital Twins’ could be the answer ‘Digital Twins’, a concept put forth by David Gelernter in his 1991 book, ‘Mirror Worlds’, could be a potential solution for urban flood resilience. Digital Twins are digital or virtual copies of physical assets or products. They connect the real and the virtual world by collecting real-time data from installed sensors. Digital Twins for Urban Flood Resilience: Examples from around the world Many such flood-prone nations and their city authorities have resorted to ‘digital twins’ in order to ensure prompt and efficient response to such flood events—occurring due to torrential rains or other natural causes. This adoption of digital twins is also to ensure the mitigation of the havoc that they might wreak on the lives and livelihoods of those most affected by them. In 2018, Newcastle University and Northumbrian Water came together with a plan to build a digital twin for the city of Newcastle to test hypothetical emergencies in the face of global climate change. In 2021, a team of researchers from McMaster University, Canada, published a paper titled, ‘DIGITAL TWIN: A CITY-SCALE FLOOD IMITATION FRAMEWORK’, which laid down a framework for creating a city digital twin to simulate flood hazards. They created the digital twin for the city of Calgary through the integration of emerging technologies like sensors, Internet of Things, and satellites, infrastructure modelling, hydrology and hydraulic modelling, demographic information and interoperability translators and then calibrated with the historical flood of 2013. The idea was to simulate the effect of flood hazards and reduce the casualties and economic losses from such events. Digital Twin Visualisation of Calgary: (a) Before flood (b) During Flood Recently, the city government of Lisbon also decided to resort to digital twins for urban flood simulation so as to build the city’s flood resilience in the wake of extreme rainfall events and risks of rising sea levels. The objective was to use cutting-edge technology to build a drainage master plan for the city such that the working of the existing drainage infrastructure is optimised and proactive measures taken in the wake of emergencies. The entire water cycle system from water distribution to storm water drainage management in the city of Porto, Portugal is supported by digital twin technology. Digital Twins effective for urban flood resilience Creating a digital twin for flood resilience in a city isn’t an easy task. 
It requires integration of several technologies like sensors, satellites, internet of things, city-scale infrastructure modelling, 3D mapping, hydrology and hydraulic modelling, interoperability translators along with demographic information and real time system behaviour. Notwithstanding the cumbersomeness, it can fetch effective results. Flood resilience models created with the help of digital twins provide continuous imitations of hazards affecting the city infrastructures. They can identify vulnerable locations across the city and enhance the city’s resilience in the wake of climate-induced hazards. Such virtual copies of physical assets could help develop reliable preparedness plans, mitigation strategies and test an infinite number of potential future emergencies. They can also tell the authorities which buildings are likely to be flooded and which infrastructure should be closed down in such an event. The technology could also determine human behaviour. For example, the most likely routes that people would use in the event of a flood. In the case of Lisbon, the flood resilience model provided the authorities with a virtual representation of the impact of storm water flows in the city’s infrastructure and assets based on the velocity and direction of the flow of the storm water. Banking on this information, the city authorities were able to build a fool-proof drainage master plan. It enabled them to identify the best trajectory and the size of the tunnels that were to be constructed. On the whole, it helped them to understand, evaluate and respond effectively to extreme weather events. Now, the city is able to efficiently analyse the water supply, wastewater and storm water to forecast flood risks and take preventive measures.
|
According to a Nasscom report, Bengaluru accounts for a quarter of India’s tech talent. Authorities can always take the help of the city’s very own techies to come up with cutting-edge technological solutions.
|
["IT Services"]
|
[]
|
Zinnia Banerjee
|
2022-09-08T18:00:00
|
2022
| 792
|
["Go", "API", "programming_languages:R", "AI", "programming_languages:Go", "Git", "R"]
|
["AI", "R", "Go", "Git", "API", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/it-services/digital-twins-is-a-clever-way-to-fight-urban-floods/
| 2
| 7
| 0
| false
| true
| true
|
10,088,011
|
ChatGPT vs The World
|
In a fair world, technology should be accessible to all and used for humanity’s betterment. However, there is a growing concern that AI could be monopolised for vested interests by large corporations. This is why open sourcing is paramount. ChatGPT, the popular chatbot by OpenAI, has been one of the biggest breakthroughs in AI. In just five days since its launch, approximately one million individuals engaged with the bot and it is expected that the number will soon reach the billion mark. Its popularity escalated to such an extent that it was featured on the cover of Times Magazine. However, ChatGPT is entirely under the control of OpenAI and no external parties are aware of the methodology or the construction behind it. Neither has OpenAI open sourced its GPT3 language model. So far, only Google has launched a ChatGPT competitor in Bard. Amazon’s chief executive Andy Jassy also told the Financial Times that the e-commerce giant has been working on a ChatGPT competitor for some time now. Besides, China-based ‘Baidu’ is also working on something similar; however, none of them are—or are likely to be—open source. Open source ChatGPT competitors Without doubt, ChatGPT has been one of the biggest breakthroughs in AI. Today, everyone—from individuals to multiple enterprises in diverse fields—is looking to leverage the technology. However, OpenAI has not released the code for ChatGPT. This makes it difficult for the people outside OpenAI to recreate such models. To tackle this, Emad Mostaque revealed on Twitter earlier this month that his firm ‘Stability AI’, is working on an open-source version of ChatGPT. This is a good take. We are working on open chatGPT (android vs iOS eh) and folk will build on both.Value creation will be decent but nothing compared to the crazy disruptive stuff that will come in the following years.Cost to buidl on this minimal, costs scale with revenue https://t.co/ubqXdc6ZTU— Emad (@EMostaque) January 5, 2023 Similarly, Hugging Face—the company that bootstrapped the BigScience collaboration—is also working on an open source ChatGPT rival. The company has also partnered with Amazon Web Services (AWS) for the next iteration of their BLOOM Large Language Model as well as its open-source ChatGPT rival. Further, AI startup Colossal-AI has also found a way to build their own ChatGPT with less computing resources. (Source: Colossal AI blog) To achieve this objective, the organisation has utilised a PyTorch-based implementation that encompasses all three stages, including pre-training, reward model training, and reinforcement learning. They offer a demo version of the training process that requires only 1.62 GB of GPU memory and can be done on a single consumer-grade GPU, with 10.3x growth on one GPU model capacity. The importance of open source Over the years, large corporations have developed many Large Language Models (LLM); however, most of them were not open source. To counter this, around 1000 researchers from different parts of the globe came together to launch BLOOM, an open source LLM trained in complete transparency. Similarly, when OpenAI released DALL-E 2, the AI text-to-image generator, the internet fell in love with it. However, neither DALL-E 2 or other similar models such as Midjourney and Imagen by Google were open source. Enter Stable Diffusion, a similar open source text-to-image model by Stability AI. 
Founder Emad Mostaque, a firm believer in open source, wants to remove many of the barriers, such as compute and funding for independent and academic researchers to build some of these new AI models. Now, we are in the age of ChatGPT, which despite being an impressive tool, has its shortcomings. Apart from blurting out hallucinatory responses and convincingly suggesting incorrect answers as correct from time to time, it is also politically biased. ChatGPT now spouts only politically correct nonsense on various topics, indicating that bias was introduced. It used to offer pros and cons on contentious topics. To be trusted, AI must be open source. More importantly, the sources that were used to "train" it must be disclosed.— Greg Utas (@GregUtas) February 22, 2023 Open sourcing promotes technological agility and would allow the community to collaborate and address the drawbacks of ChatGPT, thus resulting in improvements at a lower cost. Besides increased collaboration among the community, open sourcing allows users to inspect the source code and understand how the algorithms and models work. This transparency promotes accountability and trust in AI. “Remember the explosion when DALL-E levelled up via the open source Stable Diffusion? AI will level up yet again when open source ChatGPT hits the streets,” Jeff Garzik, co-founder at Bloq, tweeted. Open source is a massive advantage in AI. It’s the reason every new paper is implemented on stable diffusion rather than Dall-E.— Varun Mayya (@waitin4agi_) February 23, 2023 Open sourcing also encourages innovation by allowing users to build upon existing code and contribute to the development of new features and functionalities. “I believe that while ChatGPT-like models are probably inaccessible for people to train and develop right now, this will change very shortly with the combination of more label-efficient approaches and open-source initiatives,” Tanishq Mathew Abraham, Founder and CEO at MedARC AI, said. Besides, OpenAI asserts that it is developing this technology for the betterment of humanity. Therefore, it would be logical for them to open source ChatGPT, allowing the entire community to collaborate and help the technology scale. But, will OpenAI make ChatGPT open source? Unlikely. Do you think OpenAI could have remained a open source, non-profit company and still achieve what's it's doing with ChatGPT, etc?— Dave Lee (@heydave7) February 17, 2023
|
Open sourcing promotes technological agility and would allow the community to collaborate and address the drawbacks of ChatGPT.
|
["Deep Tech"]
|
["AI Tool"]
|
Pritam Bordoloi
|
2023-02-23T18:41:19
|
2023
| 916
|
["Go", "ChatGPT", "Hugging Face", "OpenAI", "PyTorch", "AI", "AWS", "RAG", "Rust", "AI Tool", "R"]
|
["AI", "ChatGPT", "OpenAI", "PyTorch", "Hugging Face", "RAG", "AWS", "R", "Go", "Rust"]
|
https://analyticsindiamag.com/deep-tech/chatgpt-vs-the-world/
| 4
| 10
| 2
| true
| false
| true
|
32,812
|
Perspectives on AI Adoption Strategy and Trends in Singapore
|
Singapore is the financial hub of South East Asia. Given the number of financial firms that have setup their operations in the country, one wonders how much the organizations there have adopted AI and data science. I was in Singapore last month and spoke to various leaders in the financial industry there. Here’s my interview with Johnson Poh, who has been a data scientist with experience spanning across the finance, consulting and government sectors for the past decade. His professional appointments include being Head Data Science/Practice Lead at DBS Bank, Chief Data Scientist, ASEAN at Booz Allen Hamilton as well as Principal Data Scientist at Ministry of Defence, Singapore respectively. He also serves as an adjunct faculty at SMU School of Information Systems and his focus areas include applied statistical computing, machine learning as well as big data tools and techniques. Johnson completed his bachelor’s degree at University of California, Berkeley, majoring in the subjects of Pure Mathematics, Statistics and Economics. He received his postgraduate degree in Statistics at Yale University. Analytics India Magazine: How important is Data science & AI in the financial industry? Johnson Poh: Financial institutions are taking a more data-driven approach with the adoption of modern computing stacks and the development of end-to-end big data pipelines. While the pace of progress varies across financial institutions, there are already a handful of organisations that have data analytics teams set up to work hand-in-hand with the business to apply machine learning and AI techniques across a variety of areas including acquisition, cost reduction and customer service. Within the financial industry, the restructuring of people, platforms and processes around a data driven mantra has harnessed data and accelerated the adoption of machine learning and AI implementation. AIM: What are some use cases of data science or AI that you have worked on? JP: Machine learning and AI techniques has been applied across the financial industry to deliver better customer service, reduce operating cost and detect fraud. For instance, reinforcement learning techniques have been applied in resource and logistics optimisation to lower downtime and reduce operating cost. Elsewhere in the business, machine learning techniques have been applied to customer service call centres in the area of forecasting so that manpower allocation can be made more flexible to the peaks and troughs of demand. There has also been more focus on making continued improvements in fraud detection and anti-money laundering with the use of a variety of data sources and the application of supervised and unsupervised learning techniques. AIM: How do you see the analytics ecosystem flourishing especially in south east Asia region? JP: Given the explosion of data sources, there is vast potential in the Southeast Asia region where machine learning and AI can be applied. Across the region, more banks are partnering e-commerce and telecommunication firms to leverage on data to support customer services. Cross industry data sharing will lead to significant improvements in customer background checks and credit underwriting, which will help serve a larger variety of customer segments. Apart from the business-to-business partnerships, governments in the region are also strengthening government-verified personal data services. Singapore has been at the forefront of this effort, allowing banks to use government data services, via open-APIs. 
Customers, who seek loan financing, no longer have to prepare and submit thick stacks of documentation. With SingPass, a customer can give consent for a bank to authenticate and verify his ability to repay using government-verified personal data for loan underwriting. AIM: What are some of the challenges that the industry in Singapore faces in terms of AI adoption? JP: AI technology is ready for adoption. However, regulations and information security policies will need to catch on with the ever-changing technology landscape. Governments will need to provide the right regulatory environment for banks to operate and stay competitive, while protecting its citizens’ personal data privacy. Organisations will need to review its information security policies to find the right balance between enabling business to implement AI-backed initiatives and implementing the necessary safeguards to ensure the security of customers’ data. There is urgency to dedicate focus and attention on this challenge so that we can keep pace with the momentum of AI adoption and stay ahead of the game. AIM: How can governments and citizen associations come together for a healthy discussion as well as implementation of AI? JP: Governments have a pivotal role to play in pushing the boundaries of AI implementation for social good, as well as putting in place an adequate regulatory environment to prevent the abuse of data and AI initiative. Having open dialogue on a range of machine learning and AI topics that concerns citizens and industry players is a good start. On one end of the spectrum, there is a need to discuss how machine learning and AI can be used to support social initiatives in areas including e-services, healthcare, tax reporting and law enforcement. On the other end of the spectrum, there is a need to take into account concerns over the extent to which data can and should be used by public and private sectors so that data privacy laws can adequate protect citizens and consumers. AIM: Is AI talent an issue in Singapore? If yes, how can we resolve this? JP: In recent years, the education system has pivoted towards an emphasis on fields such as mathematics, statistics, computer science and information technology. Business schools have also woven data science and AI elements into its curriculum. This is a positive trend that will expand our talent pool in the field of data science. However, with the influx of data science graduates in the workforce, time is needed to build up the experience of our talent pool in applying machine learning and AI techniques across the public and private sectors in meaningful ways. Remember, data science is an applied subject. The sooner we exercise the ability to implement, iterate and validate data science models in the business context, we faster we can achieve robustness in making data science and AI more useful and relevant. Many financial institutions have also started to foster a conducive learning environment to support their in-house data scientists by offering training programmes and development opportunities. AIM: What is the biggest trend in data science/ AI that you look forward to in 2019? JP: 2019 will be the year of data science operationalization and a demand for machine learning engineers. Many organisations over the past years, especially in the financial sector, have laid the architectural foundations for their own end-to-end data pipeline. In the years ahead, the focus will be on operationalizing data science models and deploying big data to training them. 
Machine learning engineers hold a combination of software engineering skills as well as the appreciation for big data statistical modelling techniques. Their role is crucial to the in-production data pipeline as they will be responsible for overseeing data management and computational resources, as well as the maintenance of in-production systems. They must be familiar with agile methodologies in project management as well as the continuous deployment of software components in a modern data stack. The operationalization of the modern data stack is something I look forward to, as this will shift the narrative from ideation to implementation of data science and AI, which will now bring impact and benefit to a variety of business cases.
|
Singapore is the financial hub of South East Asia. Given the number of financial firms that have setup their operations in the country, one wonders how much the organizations there have adopted AI and data science. I was in Singapore last month and spoke to various leaders in the financial industry there. Here’s my interview […]
|
["Deep Tech"]
|
["Advanced Analytics"]
|
Дарья
|
2019-01-04T05:40:47
|
2019
| 1,221
|
["data science", "Go", "machine learning", "AWS", "AI", "RAG", "Aim", "analytics", "R", "Advanced Analytics", "fraud detection"]
|
["AI", "machine learning", "data science", "analytics", "Aim", "RAG", "fraud detection", "AWS", "R", "Go"]
|
https://analyticsindiamag.com/deep-tech/singapore-ai-adoption-its-talent-woes/
| 2
| 10
| 3
| false
| false
| false
|
10,171,183
|
UiPath Partners with HCLTech to Boost AI-Driven Automation for Global Businesses
|
UiPath, a global automation company, has partnered with HCLTech to help companies worldwide automate their operations using AI. The partnership aims to make businesses more efficient by reducing the need for human involvement in everyday processes. HCLTech will use the UiPath Platform to help companies automate areas such as finance, supply chain, procurement, customer service, marketing, and human resources. The tech giant will also provide ready-to-use AI tools to help businesses start and scale their automation efforts easily. The goal is to help organisations become more flexible, improve employee productivity, and get quicker returns on their technology investments. As part of this collaboration, HCLTech and UiPath will set up an AI lab in India. The lab will focus on building industry-specific solutions and small-scale working models that cover everything from planning and implementation to ongoing improvements. HCLTech will also use its global delivery network to support UiPath customers in regions like North America, Europe and Asia-Pacific. Commenting on the partnership, Ashim Gupta, chief operating officer and CFO at UiPath, said, “As we shift towards a new era with agentic AI, agentic automation will be critical to providing businesses with the speed and agility to transform operations and unlock new business potential.” Raghu Kidambi, corporate vice president at HCLTech, added, “Our proven expertise in hyperautomation, AI and cloud-first architectures helps us provide industry-specific and advanced automation solutions at scale.” The UiPath Platform offers specialised Autopilot solutions, such as Autopilot for Business Analysts (that can create business intelligence dashboards) and Autopilot for Developers (that can create app interfaces). Agent Builder makes it easy for businesses to build a path to agentic automation without the need for deep technical knowledge. This new model by UiPath unifies AI agents that make real-time decisions, robots that handle rule-based tasks, and people who guide and oversee the process into a single, intelligent system.
|
The partnership aims to make businesses more efficient by reducing the need for human involvement in everyday processes.
|
["AI News"]
|
["AI"]
|
Shalini Mondal
|
2025-06-03T13:36:15
|
2025
| 307
|
["business intelligence", "Go", "agentic AI", "AI", "RAG", "automation", "Aim", "ViT", "GAN", "R"]
|
["AI", "agentic AI", "Aim", "RAG", "R", "Go", "GAN", "ViT", "automation", "business intelligence"]
|
https://analyticsindiamag.com/ai-news-updates/uipath-partners-with-hcltech-to-boost-ai-driven-automation-for-global-businesses/
| 3
| 10
| 3
| false
| true
| false
|
10,044,550
|
Does Swarm Learning Have An Edge Over Federated Learning?
|
“The world is going to be decentralised, so the majority of data will be created at the edge. That’s the thing we have to solve in the future because, in order to get the value out of that data, you must share it somehow,” said Patrik Edlund, head of communication for Germany, Austria, and Switzerland, Hewlett Packard Enterprise. Swarm Learning is a decentralised machine learning framework that enables organisations to use distributed data to build ML models by leveraging blockchain technology that facilitates the sharing of insights captured from the data rather than the raw data itself. Swarm learning is a biologically inspired artificial intelligence approach based on the behaviour of social insects like ants and bees. Why is swarm learning needed? Traditional machine learning makes use of a data pipeline and a central server that hosts the trained model. The disadvantage is all the datasets are sent to and from the central server for processing. It is time-consuming, expensive and requires a lot of computing power. This communication can also hurt user experience due to network latency, connectivity and so on. In addition, huge datasets need to be sent to one centralised server, raising privacy concerns. Many industries like healthcare depend heavily on data. When projects use their own limited sources of data without sharing or coordinating among organisations, the studies and datasets tend to be small which hinders the true potential of studies. Even teams and businesses fail to use another team’s knowledge because of data ownership attitudes resulting in tedious handling and duplication of datasets. Many times, by law, all data cannot be shared, and it must remain within the closed system of a singular model, which means other researchers will not be able to use that data and build upon it. Every research and development is burdened with data regulatory and privacy headaches. Today, the swarm intelligence approach is mostly leveraged by the healthcare sector, a common use case for conducting research as many organisations share information about diseases and their identification. The approach has been proposed to accelerate the introduction of precision medicine noticeably. In a Nature paper, the central model was compared to a swarm learning model, and researchers found that the resulting accuracy of both models was identical. How swarm protects data According to the International Data Corporation, global data will grow from 33 zettabytes in 2018 to 175 zettabytes in 2025. In swarm learning, the ML method is applied locally at the data source. The approach leverages a decentralised ML approach. It makes use of edge computing, blockchain-based peer-to-peer networking and coordination without any need for one central server to process data. AI modelling is done by the devices locally at the edge (source of the data), with each node building an independent AI model of their own. The network amplifies intelligence with real-time systems with feedback loops that are interconnected. Source: Swarm Learning as a privacy-preserving machine learning approach for disease classification (research) Enhanced federated learning Federated learning also works on a similar principle. The term was first introduced in Google AI’s 2017 blog. Federated learning, however, requires a central server that coordinates the participant nodes and receives model updates. 
Centralized federated learning “The networking and communication bottleneck is still one of the key issues in federated learning due to frequent interactions between the central server and the clients,” said Mehryar Mohri, head of the Learning Theory Team at Google. However, because AI training in swarm learning is done at the edge using the compute available on the clients, the back and forth to a central control is removed. In swarm, central authority is substituted by a smart contract executing within the blockchain. Each node updates the blockchain, which then triggers a smart contract to execute an Ethereum VM code. This puts together all learning and constructs a model embedded in the Ethereum blockchain. Swarm in action Some companies have begun leveraging Swarm intelligence. For example, Italian startup Cubbit has developed a distributed technology for cloud storage that uses swarm intelligence to deliver speed and privacy, with each Cubbit Cell acting like a node in a swarm. Moreover, the maintenance of these systems costs much less as compared to traditional data centers. Dutch company DoBots specialises in swarm robotics. The company’s project FireSwarm consists of a group of UAVs that specialise in finding dune fires. German startup Brainanalyzed enables scaling profits and predicting market movements for fintech customers. It combines swarm intelligence with data analytics to improve financial decision making. Also read: How This Startup Is Using Swarm AI To Make Deep Learning Technology Accessible For Everyone Swarm learning is also used for staff scheduling. The Particle Swarm Optimization (PSO) is used to solve nurse rostering problems. Ant Colony Optimization (ACO) can be used for vehicle routing problems. The Artificial Bee Colony (ABC) is a use case for group formation and task allocation in rescue robots. The swarm learning code is available and can be downloaded from HPE, and the code for models and data processing used in the Nature paper analysis can be found at Github.
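To make the aggregation contrast concrete, here is a minimal, hypothetical Python sketch: the "models" are just parameter vectors, the local step is a toy gradient update, and the blockchain smart-contract coordination that real swarm learning systems (such as HPE's) use is deliberately omitted.

import numpy as np

# Toy local training step: nudge the weights toward this node's private data mean.
def local_update(weights, local_data, lr=0.1):
    gradient = weights - local_data.mean(axis=0)      # illustrative gradient only
    return weights - lr * gradient

def federated_round(global_weights, client_datasets):
    # Federated learning: a central server gathers every client's update
    # and averages them into one new global model (FedAvg-style).
    client_weights = [local_update(global_weights, d) for d in client_datasets]
    return np.mean(client_weights, axis=0)

def swarm_round(node_weights, node_datasets):
    # Swarm learning: no central server; each node trains locally and then
    # merges parameters directly with its peers (in real systems the merge is
    # coordinated by a blockchain smart contract, omitted here).
    updated = [local_update(w, d) for w, d in zip(node_weights, node_datasets)]
    merged = np.mean(updated, axis=0)                 # peer-to-peer parameter merge
    return [merged.copy() for _ in node_weights]      # every node keeps a full model

rng = np.random.default_rng(0)
datasets = [rng.normal(loc=i, size=(100, 3)) for i in range(3)]
print("federated global weights:", federated_round(np.zeros(3), datasets))
print("swarm node weights:", swarm_round([np.zeros(3)] * 3, datasets)[0])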
|
According to the International Data Corporation, global data will grow from 33 zettabytes in 2018 to 175 zettabytes in 2025.
|
["AI Features"]
|
[]
|
Prajaktha Gurung
|
2021-07-28T11:00:00
|
2021
| 846
|
["federated learning", "machine learning", "artificial intelligence", "AI", "ML", "RAG", "deep learning", "analytics", "edge computing", "R"]
|
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "analytics", "federated learning", "RAG", "edge computing", "R"]
|
https://analyticsindiamag.com/ai-features/does-swarm-learning-have-an-edge-over-federated-learning/
| 3
| 10
| 4
| false
| true
| true
|
10,093,223
|
Is the Big Data Lake Era Fading?
|
Data lakes have become an indispensable part of any modern data infrastructure due to their varied benefits. Owing to their ability to store large amounts of raw and unstructured data while providing democratic and secure access has made it a favourite of enterprises. Estimates put the CAGR of the data lakes market at about 24.9%, with a predicted market size of $17.6 billion by 2026. With the bombastic growth that this market is seeing, it is no surprise that enterprises are finding new use cases for data lakes —— moving away from monolithic data lakes to domain-specific data lakes. Why monolithic data lakes are bad Data lakes undoubtedly offer benefits over the previous, more traditional approach of handling data, like ERP and CRM softwares. While the previous approach is more like small, self-owned, self-operated stores, data lakes can be compared to Walmart, where all the data can be found in a single place. However, as the technology matures, enterprises are finding that this approach also comes with its set of drawbacks. Without proper management, large data lakes can quickly become data swamps — unmanageable pools of dirty data. In fact, there are 3 paradigms in which data lakes can fall apart, namely complexity, data quality, and security. Flexibility is one of the biggest pros of maintaining a data lake, as they are large dumps of raw data in their native format. This data is also not stored in a hierarchical structure, instead using a flat architecture to store data. However, with this flexibility also comes with added complexity, meaning that talented data scientists and engineers need to trawl through this data to derive value out of it. This cannot be done without specialised talent to maintain and operate it. This leads into our next issue — data quality. Operating and sifting through a data lake consumes lots of time and resources, requiring constant data governance. If this is ignored, the data lake will become a data swamp, with lots of new data not being properly labelled or identified. Metadata is the key to a good data lake, and this requires constant governance. Due to the centralised nature of data lakes, security becomes a concern with the number of teams that are accessing them. Access control is one of the most important facets of maintaining a data lake, as well as providing the right set of data to the right teams. If this is not done properly, sensitive data might become prone to leaks. Even as data lakes have these drawbacks, their positive impact is undeniable. Their scalability, cost savings, and functionality are their biggest selling points. However, there is a way to get the best of both worlds — moving away from a monolithic data lake to various smaller, distributed data lakes. A data monolith vs. a data mesh As data lakes scale up, these issues have become more prominent, prompting companies to move to smaller, domain-specific data lakes. Termed as a data mesh, this is more of an organisational approach leveraging the benefits of data lakes with few of their drawbacks. In a typical data lake, all of the organisation’s data is ingested into one platform, and then cleaned and transformed. This represents a move away from domain oriented data ownership to one that is more agnostic to the domains, creating a centralised monolith. Creating a data mesh bypasses these limitations, going back to domain oriented data ownership while maintaining the benefits of data lakes. 
Instead of providing a centralised repository that various teams will access through access control, teams can take ownership of the data created in their domains. This approach not only reduces the amount of maintenance required for the overall data lake, but also gives democratised access to specific domains, allowing them to take charge of their data. Data mesh bypasses the issues posed by monolithic data lakes. Data security becomes less of an issue when compared to a monolithic structure, as teams only access the data they need to, as opposed to controlled access to all the data. Complexity is also reduced, making it easier for data concierges to handle and manage the data. Managing data quality also becomes easier, as the smaller the data lake is, the lesser the likelihood of it becoming a data swamp. However, even as smaller data lakes exist, it is important for them to be built upon an existing big data architecture to allow for cross-domain data access. Even with the benefits data mesh offers, it is important to note that these benefits will only be seen as the data needs of a company scale up. At smaller scales, the benefits offered by data mesh will be outweighed by the benefits offered by a centralised data lake. As with any data architecture, companies must test what works for their specific use-cases.
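As a rough illustration of the organisational shift described above (all class, team and dataset names below are hypothetical and not tied to any particular data mesh product), each domain can publish its data as a small, self-owned "data product" with its own access policy, instead of every team being granted access to one shared lake.

from dataclasses import dataclass, field

# Minimal sketch of domain-oriented ownership: each domain team publishes a
# "data product" with its own schema, owner and access policy, rather than
# every team querying a single monolithic lake.
@dataclass
class DataProduct:
    domain: str
    owner_team: str
    schema: dict
    allowed_consumers: set = field(default_factory=set)
    rows: list = field(default_factory=list)

    def publish(self, new_rows):
        # The owning domain stays responsible for quality and metadata.
        self.rows.extend(new_rows)

    def read(self, consumer_team):
        # Access is scoped per domain instead of being granted lake-wide.
        if consumer_team != self.owner_team and consumer_team not in self.allowed_consumers:
            raise PermissionError(f"{consumer_team} cannot read {self.domain} data")
        return list(self.rows)

orders = DataProduct(domain="orders", owner_team="commerce",
                     schema={"order_id": int, "amount": float},
                     allowed_consumers={"finance"})
orders.publish([{"order_id": 1, "amount": 42.0}])
print(orders.read("finance"))     # permitted: finance is an allowed consumer
# orders.read("marketing")        # would raise PermissionError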
|
Monolithic data lakes come with their own set of disadvantages, but smaller data lakes might help enterprises at scale.
|
["AI Features"]
|
[]
|
Anirudh VK
|
2023-05-12T15:00:00
|
2023
| 802
|
["big data", "Go", "AI", "data mesh", "Scala", "RAG", "data governance", "data quality", "R", "data lake"]
|
["AI", "RAG", "R", "Go", "Scala", "big data", "data lake", "data mesh", "data governance", "data quality"]
|
https://analyticsindiamag.com/ai-features/is-the-big-data-lake-era-fading/
| 3
| 10
| 2
| true
| false
| false
|
10,162,891
|
India is a Very Important Market for AI, says OpenAI’s Sam Altman in Delhi
|
As part of his global tour, OpenAI CEO Sam Altman is in Delhi today for the company’s DevDay, joined by India’s IT minister Ashwini Vaishnaw and OpenAI’s policy lead Pragya Misra. Altman underlined India’s significance in the global AI ecosystem, also calling it their second biggest market. He clarified that his comment about India’s foundational models two years ago is being taken out of context. “That was a very specific time with scaling laws. But we are now in a world where we have made incredible progress with distillation,” he said, referring to the power of small models and reasoning models. He also said models are still not cheap, but they are doable, and India can be a leader. Back then, Altman had said it was totally “hopeless for India to compete with OpenAI in building foundation models”. Altman still maintained that AI training costs will continue to rise exponentially, but the returns in intelligence and revenue will also grow significantly. These comments come in light of DeepSeek’s rise. According to him, near-term AI models are already reaching the threshold of being good enough to address critical issues like healthcare and education—sectors where India has much to gain from AI innovation. However, he emphasised that the technology is not yet advanced enough to cure cancer or similar diseases. Adding to this, Vaishnaw spoke about how India’s young entrepreneurs are focused on pushing innovation to the next level while keeping costs down. He compared it to the Chandrayaan mission, asking why the same ambition and efficiency couldn’t be brought to developing large language models (LLMs). Altman also spoke about OpenAI’s recent release, deep research, a new capability in ChatGPT that independently conducts multi-step research on the internet. “Deep research can do a single digit percentage of all economic, time consuming tasks. It can make you twice as efficient,” he said. During this trip, Altman is also set to meet Prime Minister Narendra Modi, along with other policymakers and developers. His visit comes at a time when India is ramping up its AI ambitions. Just yesterday, Ola chief Bhavish Aggarwal announced Krutrim AI Lab and the launch of several open source AI models tailored to India’s unique linguistic and cultural landscape. The IndiaAI Mission seeks to build a comprehensive ecosystem that fosters AI innovation by democratising computing access, enhancing data quality, and developing indigenous AI capabilities.
|
During this trip, Altman is expected to meet Prime Minister Narendra Modi.
|
["AI News"]
|
["OpenAI"]
|
Aditi Suresh
|
2025-02-05T13:00:53
|
2025
| 394
|
["Go", "ChatGPT", "OpenAI", "AI", "AWS", "Git", "RAG", "Ray", "foundation models", "R"]
|
["AI", "foundation models", "ChatGPT", "OpenAI", "Ray", "RAG", "AWS", "R", "Go", "Git"]
|
https://analyticsindiamag.com/ai-news-updates/india-is-a-very-important-market-for-ai-says-openais-sam-altman-in-delhi/
| 2
| 10
| 1
| false
| false
| false
|
58,533
|
Why Is Databricks Gaining Traction?
|
In February 2020, Gartner released its Magic Quadrant for Data Science. A pleasant surprise was to see Databricks amongst the leaders. Interestingly, it made a swift transition from the visionaries quadrant to the leaders quadrant within a year. It is a well-deserved placement, since Databricks is steadily growing into a major analytics vendor. One might wonder at the reason for such growth, since giants like Google and Microsoft are in the visionaries quadrant while the grand old IBM is still a challenger. The primary reason for Databricks' leader position is its Unified Analytics platform. This brings us to our first and foremost point: 1. Unified Analytics platform If I were asked for one single reason to choose Databricks over anything else, this would be it: the fact that it is a unified analytics platform. If one wishes to build a state-of-the-art analytics system, it will involve a team of Data Engineers, Data Analysts, Data Scientists and Machine Learning Engineers. The Data Engineers can build cutting-edge data pipelines by realizing data architectures like the Lambda Architecture and Delta Architecture. Furthermore, Data Analysts can leverage built-in visuals or can connect to Databricks from tools like Power BI to analyze the data, while the Data Scientists can build ML models. Lastly, Machine Learning Engineers can leverage tools like MLflow to manage the end-to-end ML lifecycle. This makes Databricks a one-stop solution for entire analytics teams, as opposed to offerings from giant vendors like Microsoft, where multiple services need to be combined to build an end-to-end analytics system. That approach leads to high coupling and, in turn, low cohesion, driving up the cost of integration and maintenance. I admit that we have tools like Azure Synapse Analytics that show a similar promise to Databricks; however, it is still in its nascent stages. Nonetheless, what makes Databricks such a versatile platform? The answer is simplified Apache Spark. 2. Apache Spark simplified I can clearly remember the days when installing Spark was a nightmare. Spinning up a Spark cluster on cloud services like Azure HDInsight wasn't easy either. However, with Databricks, creating and leveraging a Spark cluster is a matter of a few clicks. Furthermore, cloud hosting on AWS and Azure has made it very easily accessible. A key advantage of Databricks is autoscaling: clusters are scaled automatically based on compute requirements, which reduces operational and maintenance costs to a great extent. 3. Multi-language and multi-platform support Since Databricks is based on Spark, all the benefits of Apache Spark, the modern in-memory distributed computing platform, are included naturally. For instance, the multi-language support of Spark can be leveraged by default. Hence, as of now, four programming languages, viz. Python, Scala, R and SQL, form the core of the platform. However, a key advantage that Databricks offers is language interoperability. This typically comes in handy when traditional ETL developers move to big data environments like Databricks. For instance, a developer might read the data from a data store into a PySpark DataFrame, leverage the full power of Spark SQL, and convert the result of the SQL query back into a PySpark DataFrame for writing it to a data store. This helps us leverage the best of both the traditional and big data worlds. Moreover, we have a host of Data Engineers and Data Scientists who are comfortable with a particular toolset. 
For example, a popular ETL tool viz. Informatica has thousands of developers. These developers are candidate data engineers. Hence, in order to facilitate their smooth transition to the big data world while allowing them to retain their skillset, Databricks has partnered with Informatica for data ingestion into delta lakes. More details here. Similarly, MATLAB is a famous tool for creating models. However, it has its own language and environment making it difficult for its users to migrate to spark. Hence, databricks has come up with MATLAB integration, thus bringing out the best of the two tools. Although this is in preview, it holds a lot of promise. 4. Rich Notebooks and Dashboards The icing on the cake is a rich UI experience of Databricks. We know that the usage of Notebooks has risen amongst the Data Science and Data Engineering community exponentially. Databricks gives us the same Notebook experience along with rich visualization embedded into it. This gives an extra appeal to Data Scientists since they can skip some code to create visuals for data analysis. Conclusion With all the above advantages, is it a surprise that Databricks has made it to the coveted position?
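A rough sketch of that interoperability is shown below; the file path, column names and table name are made up, and on Databricks a ready-made SparkSession named spark is already available, so the builder line can be skipped there.

from pyspark.sql import SparkSession

# Read data into a PySpark DataFrame, query it with Spark SQL, and get the
# result back as a DataFrame that can be written to a data store.
spark = SparkSession.builder.appName("interop-demo").getOrCreate()

sales_df = spark.read.csv("/tmp/sales.csv", header=True, inferSchema=True)
sales_df.createOrReplaceTempView("sales")        # expose the DataFrame to SQL

top_products = spark.sql("""
    SELECT product, SUM(amount) AS total_amount
    FROM sales
    GROUP BY product
    ORDER BY total_amount DESC
""")                                             # the result is again a DataFrame

top_products.write.mode("overwrite").parquet("/tmp/top_products")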
|
In February 2020, Gartner released its Magic Quadrant for Data Science. A pleasant surprise was to see Databricks amongst the leaders. Interestingly, it made a swift transition from the visionaries quadrant to the leaders quadrant within a year. It is a well-deserved placement, since Databricks is steadily growing into a major analytics vendor. One might […]
|
["AI Features"]
|
["informatica", "Why is Python so Popular"]
|
Prasad Kulkarni
|
2020-03-18T10:00:34
|
2020
| 770
|
["data science", "machine learning", "informatica", "AWS", "AI", "ML", "distributed computing", "Why is Python so Popular", "RAG", "analytics", "MLflow", "Azure"]
|
["AI", "machine learning", "ML", "data science", "analytics", "MLflow", "RAG", "AWS", "Azure", "distributed computing"]
|
https://analyticsindiamag.com/ai-features/why-is-databricks-gaining-traction/
| 3
| 10
| 0
| false
| true
| false
|
10,068,462
|
Methods to Serialize and Deserialize Scikit Learn and Tensorflow models for production
|
There are various methods to put trained machine learning models into production, and serialization and deserialization is one way to check whether a developed machine learning model is ready for production. The technique lets a trained model be packaged into a portable format, moved to the production environment, reconstructed there, and then taken forward to deployment. This article briefs about the common methods to serialize and deserialize machine learning and deep learning models for production or transmission. Table of Contents: Overview and the necessity for serialization and deserialization; Case study for scikit learn models (pickle format, joblib format); Case study for TensorFlow models (HDF5 (h5) format, JSON format); Summary. Let's first discuss why we need serialization and deserialization while deploying a model. Overview and Necessity of serialization and deserialization Serialization is the process of transforming a data structure or a trained model into a format that can be stored or transmitted and later reconstructed in the production environment. Reconstructing the serialized model in production is termed Deserialization. The process of serialization and deserialization is therefore necessary to get models ready for production or final deployment and to ensure that reliable models are delivered in production. There are various formats in which machine learning and deep learning models can be serialized and deserialized. The standard formats for machine learning models are primarily pickle, joblib, and JSON (JavaScript Object Notation), while for deep learning models Keras supports formats such as HDF5 (h5, Hierarchical Data Format), YAML (Yet Another Markup Language) and JSON. The format is chosen according to computational requirements and ease of access and storage; the JSON and YAML formats are the more human-interpretable and flexible of these. Case Study for scikit learn models Serialization and deserialization of machine learning models can be done in three formats, as mentioned earlier: the pickle format, the joblib format, and the JSON format. Let's explore each of the types individually. Steps involved in Pickle Serialization and Deserialization Usage of the pickle module is generally termed pickling, wherein the trained model parameters are converted into a byte stream and later deserialized at the production end. The steps involved in pickling machine learning models are shown below. Import the required module for pickling into the working environment as shown below. import pickle Since pickle is part of the Python standard library, no pip installation is needed; a plain import is sufficient. Later, a name can be given for the pickle file, and a print statement can be added to signal successful serialization of the machine learning model to the working environment, as shown below. dtc_pkl_filename="DTC_Model.pkl" print('Pickle model serialized to the working environment') If the serialization of the trained model is successful, we get the output below. 
The inbuilt function of dump has to be used as shown below to open the serialized pickle file and perform the required write operations, here mainly write binary (wb) is used to ensure that the file is accessible to perform write operations in it in binary format as pickling process also supports binary format storage. The code for the same is given below. with open(dtc_pkl_filename, 'wb') as file: #wb stands for write binary and open is used to serialize the model pickle.dump(dtc_model, file) # dump is used to serialize the ML model Now the load module of pickle is used to deserialize the dumped pickle file in the same working environment as shown below where read binary (rb) is used for deserialization of the trained model and also to evaluate various parameters of the model. The complete steps of deserialization and the usage of the deserialized model for evaluating certain parameters are shown below. with open(dtc_pkl_filename, 'rb') as file: ## rb stands for read binary which is used to deserialize the ML model from the working environment des_dtc_model = pickle.load(file) ## load is used to bring the ML model from the pickle format to the working environment Successful deserialization of the model to the working environment can be done using a print statement as shown below. print('Pickle model deserialized to the working environment') The above print statement generates the following output. Later using the deserialized model the accuracy score can be computed as shown below. acc_score=des_dtc_model.score(X_test,Y_test) The deserialized model in this case study has yielded an accuracy score of 70.99% as shown below. print('Accuracy score of Deserialized Model is',acc_score*100) Pickling of machine learning models is easier but sometimes due to the inability of easier human interpretation pickling process is condemned in production and other formats are opted. Steps involved in joblib serialization and deserialization Joblib is an extension of the pickle module and it is used instead of a pickle due to its lighter data type transformation in the pipeline and its inbuilt faster processing for various data types. The joblib module can be made available in the working environment as shown below. pip install joblib import joblib Later a random name for the joblib file can be declared as shown below. dtc_joblib_file = "DTC_Model.pkl" Now the process of serializing the joblib file can be done by using the inbuilt function of dump similar to pickle to serialize the file format to the working environment and the successful serialization of the joblib file can be evaluated as shown below. joblib.dump(dtc_model, dtc_joblib_file) print('Joblib model serialized to working environment') Now the trained model can be deserialized to the working environment as shown below using the load module similar to pickle. The steps involved in deserialization are shown below. DTC_joblib_model = joblib.load(dtc_joblib_file) ## load is used to deserialize the ML model to the working environment The successful deserialization can be validated as shown below with the corresponding output generated. print('Joblib model deserialized to working environment') DTC_joblib_model Later the deserialized model was used to evaluate the accuracy score along with the output generated as shown below. 
acc_score_joblib=DTC_joblib_model.score(X_test,Y_test) print('Accuracy score of Deserialized Joblib Model is',acc_score_joblib*100) However, the joblib module is an extension of the pickle but it is opted over pickle for its computational and storage benefits. In the next section, let’s explore the steps involved in serializing and deserializing ML models in JSON format. Case study for Tensorflow models Unlike the process of serialization and deserialization of machine learning models, deep learning models have their respective formats of serialization and deserialization. For deep learning models, substantial support is provided by Keras for formats like hdf5 (h5), YAML, and JSON. The steps involved for the same are briefly discussed below. HDF5 (h5) format of serialization and deserialization First, the necessary module has to be imported from Keras as shown below. from tensorflow.keras.models import load_model Now the deep learning model developed can be serialized into the working environment and the serialization of the deep learning model also can be checked as shown below. model.save('serialized_dl_model.h5') ## Serializing the model print('Model serialized to the disk') Now the model can be deserializable into the working environment as shown below. des_model=load_model('serialized_dl_model.h5') ## Deserializing the model print('Model deserialized to the disk') Now the deserialized deep learning model can be used for the evaluation of certain parameters. The steps involved to evaluate the model testing loss and the testing accuracy is shown below. print('Testing loss {} \n Testing accuracy {}'.format(des_model.evaluate(X_test,Y_test)[0],des_model.evaluate(X_test,Y_test)[1])) From the implementation end, hdf5 (h5) format based implementation is easier, but as h5 format stores complex data in hierarchical format it sometimes has concerns with respect to optimal convergence and faster processing. JSON format of serialization and deserialization Now let us explore the steps involved in serialization and deserialization of deep learning models in JSON format. Unlike other formats, the necessary library has to be imported from the Keras module as shown below. from tensorflow.keras.models import model_from_json Once the necessary module is imported the deep learning model is serialized using the to_json() module as shown below. json_model=model.to_json() ## Serializing the deep learning model to the disk Now the model has to be deserialized by first creating the JSON file in the readable format as shown below. json_file=open('json_model','r') ## r is used to read the json_file opened Now the created JSON file has to be loaded by utilizing the read() inbuilt function and closed accordingly as shown below. loaded_json_model=json_file.read() json_file.close() Now the deserialized model is loaded into the working environment using the library model_from_json of Keras as shown below. des_json_model=model_from_json(loaded_json_model) Now the deserialized deep learning model has to be compiled again in the same working environment accordingly and later proceeded to evaluate the needed parameters. Here the steps involved in evaluating the deserialized models’ test performance are shown below. 
print('Testing loss {} \n Testing accuracy {}'.format(des_json_model.evaluate(X_test,Y_test)[0],des_json_model.evaluate(X_test,Y_test)[1])) In the output above, we can see that after deserialization of the deep learning model, the model's loss has increased and its accuracy has decreased in the test phase. This is expected, because to_json() stores only the model architecture and not the trained weights; unless the weights are saved and reloaded separately (for example with save_weights() and load_weights()), the deserialized model behaves like a freshly initialised one. This is also why the serialization and deserialization step is a useful check on a model's behaviour before it goes to production. Summary As mentioned earlier, machine learning and deep learning models have their respective formats of serialization and deserialization, each with associated pros and cons. The comparison below gives a glimpse of the parameters and concerns associated with the individual formats. Scikit-learn model formats: the pickle format is easy to implement but is not human interpretable and shows signs of slower convergence in production; the joblib format is an extension of pickle and is preferred over pickle for its faster handling of various data types; the JSON format is prone to attribute errors and certain attributes are hard to deserialize, but it is easily interpretable and manipulable by humans. TensorFlow model formats: the HDF5 (h5) format is easy to implement, but its hierarchical storage can be a concern for complex models and heavy data processing; YAML serialization appears to be no longer supported by Keras due to security concerns; the JSON format is easy to implement, but the deserialized model showed higher loss and lower accuracy in testing, which would be a concern during production. References: link to the notebook; TensorFlow official documentation; TensorFlow Colab notebooks
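For reference, a compact, self-contained version of the round trips discussed above is sketched below. It trains toy models on random data, so the scores are illustrative and the file names are arbitrary; note that to_json() captures only the architecture, which is why the weights are saved and reloaded separately here.

import pickle
import joblib
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from tensorflow import keras

# Toy data so the sketch is self-contained.
X = np.random.rand(200, 4)
y = (X[:, 0] > 0.5).astype(int)

# --- scikit-learn: pickle and joblib round trips ---
dtc_model = DecisionTreeClassifier().fit(X, y)
with open("DTC_Model.pkl", "wb") as f:            # serialize
    pickle.dump(dtc_model, f)
with open("DTC_Model.pkl", "rb") as f:            # deserialize
    restored = pickle.load(f)
print("pickle accuracy:", restored.score(X, y))

joblib.dump(dtc_model, "DTC_Model.joblib")        # joblib: same idea, lighter API
print("joblib accuracy:", joblib.load("DTC_Model.joblib").score(X, y))

# --- Keras: HDF5 and JSON round trips ---
model = keras.Sequential([keras.layers.Dense(8, activation="relu", input_shape=(4,)),
                          keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)

model.save("model.h5")                            # full model: architecture + weights
h5_model = keras.models.load_model("model.h5")

arch_json = model.to_json()                       # JSON holds the architecture only,
model.save_weights("model.weights.h5")            # so weights are stored separately
json_model = keras.models.model_from_json(arch_json)
json_model.load_weights("model.weights.h5")
json_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
print("h5 eval (loss, accuracy):", h5_model.evaluate(X, y, verbose=0))
print("json eval (loss, accuracy):", json_model.evaluate(X, y, verbose=0))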
|
This article briefs about the various methods to serialize and deserialize Scikit Learn and Tensorflow models for production
|
["AI Trends"]
|
["Data Science", "Deep Learning", "Keras", "Machine Learning", "Python", "Tensorflow"]
|
Darshan M
|
2022-06-07T11:00:00
|
2022
| 1,711
|
["scikit-learn", "machine learning", "Keras", "TPU", "AI", "ML", "Machine Learning", "RAG", "Python", "Colab", "deep learning", "Deep Learning", "Data Science", "TensorFlow", "Tensorflow"]
|
["AI", "machine learning", "ML", "deep learning", "TensorFlow", "Keras", "scikit-learn", "Colab", "RAG", "TPU"]
|
https://analyticsindiamag.com/ai-trends/methods-to-serialize-and-deserialize-scikit-learn-and-tensorflow-models-for-production/
| 2
| 10
| 1
| true
| false
| false
|
10,077,134
|
How Analytics Has Made Retail ‘Personalised, Frictionless, and Faster’
|
The retail industry may appear simple from the outside but has tremendous depth and science under the hood, and this is what drew Venkat Raghavan, the associate director and global head of analytics at Tesco Business Services, to this domain. Analytics India Magazine caught up with Raghavan to understand the field better and discuss what role analytics plays here. AIM: What are the main responsibilities of an analytics leader? Venkat Raghavan: I believe the core purpose of analytics leaders is to shift organisations from gut-based decision-making to data-driven decision-making. This is a significant cultural transformation, especially for large organisations. Keeping this in mind, I would like to highlight three key responsibilities of an analytics leader: Create access to new possibilities to improve decision-making: Analytics leaders need to nurture and create an environment where decision-making can become faster, deeper, and data-led. Decision-making based on Data Analytics is a “new” form of management that allows us to respond more effectively to these threats and opportunities. Build the capability base to deliver these possibilities: Now that we are laying down the platform of possibilities to improve decision-making, we now have to build the capabilities to execute those possibilities. These capabilities are created through a combination of skills, platforms, and internal and external partnerships. Analytics leaders must define the DNA of talent and create the right environment and enablement for them to be successful. This includes a strong talent value proposition, skilling programmes, and the right toolset for them to unleash their capabilities at work. Deliver tangible economic value addition to the organisation: The arcane aspect of analytics is no longer acceptable. It cannot just be used to provide insights into the business. Analytics leaders need to evolve to ensure that the insights derived from analytics play a critical role in the organisation’s growth. We need to strive to establish a straight-line connection between the analytics investments and measurable financial impacts around sales, profit, and cash. AIM: What is the role of data science in the world of retail technology? Venkat Raghavan: Until 2010, technology was focused on improving systems to enrich the experience for customers and colleagues. For example, in retail, the role of technology predominantly hinged around creating a process of searching and shopping for customers or buying, stocking, and transacting for retailers. Post-2010, there has been a paradigm shift towards making the experience more personalised, frictionless, and faster—thanks to the critical role that data science played in this transformation. It has derived the retail-driven data from the system, analysed the patterns and behaviours by answering the pertinent questions, and hence built actionable insights. In the last five years, the relationship between the functional system and data science has become more symbiotic, as illustrated in the diagram below: AIM: What are the challenges faced while implementing innovations in technology products? Venkat Raghavan: Introducing technological change or bringing in a new set of innovations present different challenges to the organisation. One of the key challenges lies in striking the right balance between speed and perfection as a process to scale up. 
An organisation is expected to ensure business continuity and secure information and intellectual property; on the other hand, it has to facilitate the execution of tech innovation and ensure a speedier go-to-market. It is a tough choice between speed and quality. The second challenge lies in deciding between an organisation’s ‘build’ versus ‘buy’ strategy. The COVID-19 pandemic has accelerated the business adoption of digital technologies as never seen before. This need for speed has raised the dilemma of which technology to buy from the market and which one to build in-house. While the traditional view has always been not to reinvent the wheel, the truth is that every resilient business is like a snowflake and keeps changing. The technology or software you bought was tailor-made for a particular problem. It may not be able to customise itself adequately for future challenges. Last but not least is the challenge of precision versus scale. When an organisation begins experimenting with digital enhancement, automation, or artificial intelligence, one truth quickly emerges; transformation is more than just developing new technological solutions and plugging them into the existing environment. To reach their full potential, new technologies need the processes to be redefined and simplified, data flow to be rewired, and the orchestration of work between humans and machines to be re-engineered. This involved a high degree of precision that could help the project or a technology implementation reach its desired outcome. However, organisations also have to ensure that the implementation of innovation is scalable and not saturated in the vicious cycle of improvisation to attain a near-perfect model. AIM: How can retailers leverage the power of data engineering? Venkat Raghavan: Advancements in data engineering have been the single biggest contributor to the analytics and science revolution we are seeing around us today. However, it is a hard-hitting fact that as much as 90% of organisational data is still dark and hidden, as they lie in unstructured formats and systems. Therefore, data engineering still has significant ground to cover to reach its full potential. Also, as opposed to the common belief that data engineering focuses on building storage systems for data, the field serves three critical purposes—data pipelining and storage, compute environment for data scientists, and productionalisation of data science algorithms for scale consumption. It is important to see this holistic value proposition of data engineering to extract its full potential. Data pipelining and storage: An organisation, especially in the retail sector, is likely to deal with a massive amount of consumer and supplier data. To analyse all that data, there is a need for a trusted and comprehensive view of the entire data set. When the data resides in multiple systems and services, it needs to be combined in ways that make sense or an in-depth analysis. Data flow can be unpredictable, and the problem gets magnified as the breadth and scope of the role data plays increases. Hence, a strong data pipeline strategy is required to eliminate most of the manual steps from the process, enable a smooth, automated flow of data from one stage to another, and make the end datasets accessible to consumers. Compute environment for data scientists: Data science is constantly getting more extensive with respect to the nature of problems and the heterogeneity of datasets and algorithms used to solve them. 
The open-source revolution is creating better possibilities for data scientists to experiment and evolve the craft now more than ever. Therefore, organisations need to constantly improve the computing environment by focusing on the right toolset and processing power to enable data science to unleash new possibilities for organisations. Productionalisation of Data science algorithm for scale consumption: Analytics and science will play a huge role in improving products and services by understanding customers and their needs. These potentially focus on real-time hyper-personalisation of customers, optimising inventory, demand forecasting, the effectiveness of promotions, and their impact on sales. AIM: How has digital transformation strategy changed the company’s top-notch products? Venkat Raghavan: Digital transformation is a never-ending journey, where every step forward accelerates your move towards future steps. We set up our Digital Transformation Centre of Excellence to enable accelerated automation and optimisation of business processes, leverage opportunities offered by new-age technologies, and ensure effective rolling and deployment of these technologies through effective change management. One of our digital transformation initiatives is the introduction of chatbots to enable self-service capabilities for our customers, colleagues, and suppliers. We have also built technology innovations that enhance the experience for our customers both in-store and while shopping online. From providing the right range of products in the stores, enabling data-driven models for space utilisation, helping with planograms and display to improving the store colleague experience, product lifecycle management, and traded planning by using the cloud, blockchain, mixed reality [AR/VR], mobile development, IoT and Computer Vision. Our technology team is also instrumental in developing other solutions such as robotic picking, urban fulfilment centres, and ‘Whoosh’, the new superfast food delivery solution for customers.
|
We need to strive to establish a straight-line connection between the analytics investments and measurable financial impacts around sales, profit, and cash.
|
["AI Features"]
|
["Interviews and Discussions", "retail", "tesco"]
|
Shraddha Goled
|
2022-10-13T11:00:00
|
2022
| 1,335
|
["data science", "artificial intelligence", "AI", "chatbots", "Snowflake", "tesco", "computer vision", "RAG", "Aim", "analytics", "retail", "R", "Interviews and Discussions"]
|
["AI", "artificial intelligence", "computer vision", "data science", "analytics", "Aim", "RAG", "chatbots", "Snowflake", "R"]
|
https://analyticsindiamag.com/ai-features/how-analytics-has-made-retail-personalised-frictionless-and-faster/
| 3
| 10
| 4
| true
| true
| true
|
23,710
|
Reliance To Invest $180 Million In AI Education Platform Embibe Over Next 3 Years
|
File photo of Aditi Avasthi, founder and CEO at Embibe. Reliance Industries Limited on 12 April 2018 executed definitive agreements to acquire a majority equity stake from existing investors of Indiavidual Learning Pvt Ltd (Embibe), a noted AI-based education platform which leverages data analytics to deliver personalised learning outcomes to each student. Reliance will further invest up to $180 million in Embibe over the next three years. According to the official press statement, this deal will include consideration for acquiring a majority stake from existing investors constituting 72.69 percent of Embibe's equity on a fully-diluted basis. Embibe will use the capital over the next three years towards deepening its research and development on using artificial intelligence in education, as well as business growth and geographic expansion, catering to students across K-12, higher education, professional skilling, vernacular languages and all curriculum categories across India and internationally. The founder and CEO of Embibe, Aditi Avasthi, will continue in her leadership role and will drive the growth of the business. Speaking on this strategic transaction, Akash Ambani, director, Reliance Jio, said in a press statement, “The investment in Embibe underlines Reliance’s commitment to growing the education sector in India and the world and making education accessible to the widest possible group of students by deploying technology. Reliance aims to connect over 1.9 million schools and 58,000 universities across India with technology. We are delighted to announce this partnership with Embibe, and believe that their highly experienced management team will be instrumental in enabling Reliance to realise its vision for the education sector, and strengthening Jio’s leadership position as a digital technology company.” Aditi Avasthi, founder and CEO at Embibe, said, “Embibe’s team has built an incredible technology platform that can deliver personalised learning outcomes in a way that is truly scalable across all education markets. With robust AI stacks focused on content intelligence and automation, behavioural recommendations and student intelligence, our products have redefined the way edtech can impact the lives of students and teachers. We are supercharging our platform with the ability to deliver both content and outcomes for every learning goal in every student’s journey, to be the leader in personalising education for India and the world. We are excited to partner with Jio – bringing unrivalled acceleration to our growth story through data and device access. Most of all, we are delighted to partner with Reliance and share their deep conviction and visionary passion to sow the seeds of a new India with data as the new soil.” The transaction is subject to customary closing conditions.
|
Reliance Industries Limited on 12 April 2018 executed definitive agreements to acquire a majority equity stake from existing investors of Indiavidual Learning Pvt Ltd (Embibe), a noted AI-based education platform which leverages data analytics to deliver personalised learning outcomes to each student. Reliance will further invest up to $180 million in Embibe over the next three years. […]
|
["AI News"]
|
["AI education", "edtech", "Reliance Industries"]
|
Prajakta Hebbar
|
2018-04-16T06:40:08
|
2018
| 423
|
["Go", "API", "artificial intelligence", "AI education", "AI", "edtech", "Scala", "Git", "RAG", "Aim", "analytics", "R", "Reliance Industries"]
|
["AI", "artificial intelligence", "analytics", "Aim", "RAG", "R", "Go", "Scala", "Git", "API"]
|
https://analyticsindiamag.com/ai-news-updates/reliance-invests-embibe-180-million/
| 2
| 10
| 2
| false
| false
| false
|
10,020,331
|
Guide to VISSL: Vision Library for Self-Supervised Learning
|
VISSL is a computer VIsion library for state-of-the-art Self-Supervised Learning research. This framework is based on PyTorch. The key idea of this library is to speed up the self-supervised learning process: from handling a new design to the evaluation part, VISSL does it all. Following are the characteristics of the VISSL framework: Reproducibility: It provides reproducible implementations of state-of-the-art self-supervised learning methods; the algorithms SwAV, SimCLR, MoCo(v2), PIRL, NPID, NPID++, DeepClusterV2, ClusterFit, RotNet and Jigsaw are already implemented in the framework. Benchmark suite: The benchmark tasks included in VISSL are linear image classification (places205, imagenet1k, voc07), full finetuning, a semi-supervised benchmark, a nearest-neighbour benchmark, and object detection (Pascal VOC and COCO). Ease of use: VISSL uses a YAML configuration system based on Hydra. Modularity: The YAML configuration contains all modular components for ease of usability and reproducibility. Scalability: It can easily train models on 1-GPU, multi-GPU and multi-node setups. Model Zoo: It provides a large set of pre-trained models, available in the VISSL Model Zoo. Requirements: Linux; Python>=3.6.2 and <3.9; PyTorch>=1.4; torchvision (matching the PyTorch install); CUDA (a version supported by the installed PyTorch); OpenCV. Installation For a Google Colab notebook, the following are the instructions to install VISSL. Install the dependencies: !pip install torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html # install opencv !pip install opencv-python !pip install apex -f https://dl.fbaipublicfiles.com/vissl/packaging/apexwheels/{version_str}/download.html Then install VISSL via pip. !pip install vissl # verify installation !python -c 'import vissl' Check this link for other methods of installation. Quick Start with VISSL This quick-start demo will show training with the VISSL framework and YAML configuration. Before getting started with the training part, let us discuss the YAML config files provided by VISSL. VISSL uses Hydra for configuration management. All the YAML files provided by it are available here. For this demo, we are going to use the YAML config file for training a ResNet-50 supervised model on 1 GPU, which can be downloaded from here or: !mkdir -p configs/config/ !wget -O configs/__init__.py https://dl.fbaipublicfiles.com/vissl/tutorials/configs/__init__.py !wget -O configs/config/supervised_1gpu_resnet_example.yaml https://dl.fbaipublicfiles.com/vissl/tutorials/configs/supervised_1gpu_resnet_example.yaml To understand the YAML file in more detail, check this link. For training purposes, VISSL provides a helper tool which handles feature extraction and training on VISSL. This helper tool is built so that it can train on 1 GPU or multiple GPUs and even provide a distributed environment for training. The file can be downloaded as: !wget https://dl.fbaipublicfiles.com/vissl/tutorials/run_distributed_engines.py Create a custom dataset for training ResNet-50; you can also use the ImageNet dataset. The code for it is available here. To use custom data with VISSL, we have to register it in VISSL (providing the metadata and the path to the dataset). For this, we create a simple JSON file with the metadata and save it as `configs/config/dataset_catalog.json`. 
json_data = { "dummy_data_folder": { "train": [ "/content/dummy_data/train", "/content/dummy_data/train" ], "val": [ "/content/dummy_data/val", "/content/dummy_data/val" ] } } # use VISSL's api to save or you can use your custom code. from vissl.utils.io import save_file save_file(json_data, "/content/configs/config/dataset_catalog.json") You can verify whether the dataset is registered with VISSL by following commands: from vissl.data.dataset_catalog import VisslDatasetCatalog # list all the datasets that exist in catalog print(VisslDatasetCatalog.list()) # get the metadata of dummy_data_folder dataset print(VisslDatasetCatalog.get("dummy_data_folder")) Train the model by the following command. !python3 run_distributed_engines.py \ hydra.verbose=true \ config=supervised_1gpu_resnet_example \ config.DATA.TRAIN.DATA_SOURCES=[disk_folder] \ config.DATA.TRAIN.LABEL_SOURCES=[disk_folder] \ config.DATA.TRAIN.DATASET_NAMES=[dummy_data_folder] \ config.DATA.TRAIN.DATA_PATHS=[/content/dummy_data/train] \ config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=2 \ config.DATA.TEST.DATA_SOURCES=[disk_folder] \ config.DATA.TEST.LABEL_SOURCES=[disk_folder] \ config.DATA.TEST.DATASET_NAMES=[dummy_data_folder] \ config.DATA.TEST.DATA_PATHS=[/content/dummy_data/val] \ config.DATA.TEST.BATCHSIZE_PER_REPLICA=2 \ config.DISTRIBUTED.NUM_NODES=1 \ config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \ config.OPTIMIZER.num_epochs=2 \ config.OPTIMIZER.param_schedulers.lr.values=[0.01,0.001] \ config.OPTIMIZER.param_schedulers.lr.milestones=[1] \ config.TENSORBOARD_SETUP.USE_TENSORBOARD=true \ config.CHECKPOINT.DIR="./checkpoints" The trained model is available at checkpoints/model_final_checkpoint_phase2.torch. This command will dump all the training logs, checkpoints and metrics in ./checkpoints directory. In the above command, config=supervised_1gpu_resnet_example : defines the config file for supervised training.config.DATA.TRAIN.DATA_SOURCES=[disk_folder] config.DATA.TRAIN.LABEL_SOURCES=[disk_folder] : define the data source for train. In this case, it is disk_folder.config.DATA.TRAIN.DATASET_NAMES=[dummy_data_folder] : define the dataset name i.e. dummy_data_folder. We registered this dataset above.config.DATA.TRAIN.DATA_PATHS=[/content/dummy_data/train] : another way of specifying where the data is on the disk. If you are using ImageNet dataset, specify the path as /path/to/my/imagenet/folder/train.config.DATA.TEST.DATA_SOURCES=[disk_folder] config.DATA.TEST.LABEL_SOURCES=[disk_folder] config.DATA.TEST.DATASET_NAMES=[dummy_data_folder] : specify the paths for Test dataset. Similar to train dataset.config.DATA.TEST.DATA_PATHS=[/content/dummy_data/val] : another way of specifying where the data is on the disk. config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=2 config.DATA.TEST.BATCHSIZE_PER_REPLICA=2 : specify the usage of resources i.e., 2 img per gpu to use for both TRAIN and TEST.config.DISTRIBUTED.NUM_NODES=1 config.DISTRIBUTED.NUM_PROC_PER_NODE=1 : setting for distributed training. In this example we have stated gpu as 1 and machine as 1.config.OPTIMIZER.num_epochs=2 config.OPTIMIZER.param_schedulers.lr.values=[0.01,0.001] config.OPTIMIZER.param_schedulers.lr.milestones=[1] : specify epochs=2 and drop learning rate after 1 epoch. You can check the full demo here. Conclusion In this article, we have discussed VISSL framework and its basics. All the advanced tutorials are available at this link. Colab Notebook VISSL Demo Official codes, docs & Tutorials are available at: GithubWebsiteDocs
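Once training finishes, the checkpoint referenced above can be inspected with plain PyTorch. This is only a rough sketch: the exact layout of the checkpoint dictionary depends on the VISSL version, so the keys printed are whatever the file actually contains rather than a fixed API.

import torch

# Load the checkpoint produced by the run above and peek at its contents.
ckpt_path = "./checkpoints/model_final_checkpoint_phase2.torch"
checkpoint = torch.load(ckpt_path, map_location="cpu")

if isinstance(checkpoint, dict):
    print("top-level keys:", list(checkpoint.keys()))
else:
    print("checkpoint object of type:", type(checkpoint))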
|
VISSL is a computer VIsion library for state-of-the-art Self-Supervised Learning research. This framework is based on PyTorch. The key idea of this library is to speed up the self-supervised learning process from handling a new design to the evaluation part, VISSL does it all. Following are the characteristic of VISSL framework: Reproducibility: It provides a […]
|
["Deep Tech"]
|
["Computer Vision", "self supervised learning"]
|
Aishwarya Verma
|
2021-02-16T10:15:00
|
2021
| 756
|
["CUDA", "AI", "PyTorch", "ML", "computer vision", "OpenCV", "Colab", "Python", "object detection", "Computer Vision", "R", "self supervised learning"]
|
["AI", "ML", "computer vision", "PyTorch", "Colab", "OpenCV", "object detection", "CUDA", "Python", "R"]
|
https://analyticsindiamag.com/deep-tech/guide-to-vissl-vision-library-for-self-supervised-learning/
| 4
| 10
| 0
| true
| true
| false
|
10,125,552
|
Anthropic CEO Says Poorly Managed AI Systems Could ‘Undermine’ Democracy
|
After wooing consumers and enterprises with its latest model, Claude Sonnet 3.5, Anthropic is extending its services to the US government and public sector in partnership with Amazon Web Services (AWS). Soon, the company is also looking to make Claude 3 Haiku and Claude 3 Sonnet available in AWS Marketplace, specifically for the US Intelligence Community (IC), and in AWS GovCloud. “We are making Claude available for applications like combating human trafficking, rooting out international corruption, identifying covert influence campaigns, and issuing warnings of potential military activities,” said Anthropic’s chief executive Dario Amodei in an exclusive interview with AIM on the sidelines of the AWS Summit 2024 in Washington, DC. “I think it’s just really important that we provide these services well. It makes democracy as a whole more effective, and if we provide them poorly, it undermines the notion of democracy,” he said. Amodei believes that what distinguishes Anthropic from OpenAI and other companies is the “concept of Constitutional AI (CAI)”. Anthropic’s CAI trains AI systems to align with human values and ethics, drawing on high-level principles from sources like the UN Declaration of Human Rights and ethical guidelines. In the near future, the company plans to provide custom constitutions for specific constituencies, or services that require specific information. Amodei added that Anthropic wants to help the US government and its citizens by providing them with a tool to easily access information related to voting or healthcare services. “Anthropic, AWS and Accenture recently worked with the DC Department of Health to power a chatbot that allows residents to ask natural language questions about things like nutrition services, vaccinations, schedules, and other types of simple health information,” he said. When discussing cloud security, he emphasised that AWS has a proven track record of providing government customers with world-class security solutions. “AI needs to empower democracy and allow it to be both better and remain competitive at all stages,” he said, adding that the government can use Claude to improve citizen services, enhance policymaking with data-driven insights, create realistic training scenarios, and streamline document review and preparation. Responsible AI Matters The founder of Anthropic has always been in favour of regulating AI. “AI is a very powerful technology, and our democratic governments do need to step in and set some basic rules of the road. We’re getting to a point where the amount of concentration of power can be greater than that of national economies and national governments, and we don’t want that to happen,” he said in a recent podcast. Considering the US elections are supposed to happen later this year, Anthropic has introduced an Acceptable Use Policy (AUP) that prohibits the use of their tools in political campaigning and lobbying. This means candidates are not allowed to use Claude to build chatbots that can pretend to be them, and the company doesn’t allow anyone to use Claude for targeted political campaigns. Anthropic has been working with government bodies like the UK’s Artificial Intelligence Safety Institute (AISI) to conduct pre-deployment testing of their models. OpenAI Lobbying the US Government OpenAI’s chief technology officer Mira Murati said during a recent interview that the company gives the government early access to new AI models, and they have been in favour of more regulation. 
“We’ve been advocating for more regulation on the frontier, which will have these amazing capabilities but also have a downside because of misuse. We’ve been very open with policymakers and working with regulators on that,” she said. Notably, OpenAI has been withholding the release of its video generation model Sora, as well as the Voice Engine and voice mode features of GPT-4o. It is likely that OpenAI might also release GPT-5 post-elections. Earlier this year, Murati confirmed that the elections were a major factor in the release of GPT-5. “We will not be releasing anything that we don’t feel confident on when it comes to how it might affect the global elections or other issues,” she said. Meanwhile, OpenAI recently appointed retired US Army General Paul M Nakasone to its board of directors. As a priority, General Nakasone joined the board’s Safety and Security Committee, which is responsible for making recommendations to the board on critical safety and security decisions for all OpenAI projects and operations. Reality: OpenAI has already won and become a black budget project.GPT6 gov supercomputer probably being built rn https://t.co/VLmFYhivnw— Beff – e/acc (@BasedBeffJezos) July 2, 2024 Meanwhile, OpenAI has been working with the US Defense Department on open-source cybersecurity software — collaborating with the Defense Advanced Research Projects Agency (DARPA) for its AI Cyber Challenge announced last year. In April, OpenAI CEO Sam Altman, along with tech leaders from Google and Microsoft, joined a DHS panel on AI safety to advise on responsible AI use in critical sectors like telecommunications and utilities. Altman has actively engaged with US lawmakers, including testifying before the Senate Judiciary Committee. He proposed a three-point plan for AI regulation, which includes establishing safety standards, requiring independent audits, and creating a federal agency to license high-capability AI models. There is no denying that both OpenAI and Anthropic are trying to win the US government’s favour and contracts. The outcome of these efforts could significantly impact not only their own standings but also the broader adoption and regulation of AI technologies in public sectors.
|
Amodei believes that what distinguishes Anthropic from OpenAI and other companies is the “concept of Constitutional AI (CAI)”.
|
["AI Features"]
|
["Anthropic", "Superintelligence"]
|
Siddharth Jindal
|
2024-07-02T17:38:47
|
2024
| 883
|
["Anthropic", "artificial intelligence", "GPT-5", "AI", "OpenAI", "GPT-4o", "ML", "chatbots", "AWS", "Aim", "Superintelligence"]
|
["AI", "artificial intelligence", "ML", "GPT-4o", "GPT-5", "OpenAI", "Anthropic", "Aim", "chatbots", "AWS"]
|
https://analyticsindiamag.com/ai-features/anthropic-ceo-says-poorly-managed-ai-systems-could-undermine-democracy/
| 2
| 10
| 2
| true
| true
| false
|
10,171,934
|
AI Just Killed the Graduate Job Market—Now What?
|
The threat of AI taking over jobs is becoming increasingly real. Amazon CEO Andy Jassy recently told employees that as AI takes on more tasks in the coming years, the nature of work at Amazon will change, potentially reducing the number of corporate roles. “As we roll out more generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” he said. This shift is beginning to put tangible pressure on undergraduate students who are already worried about their job prospects. A report from San Francisco-based venture firm SignalFire shows that major tech companies—including Google, Amazon, Apple, Meta, Microsoft, NVIDIA, and Tesla—have cut fresh graduate hiring by over 50% since 2022. The sharp decline has left many students anxious about their careers. In a recent video titled The Truth About AI and Jobs, LinkedIn co-founder Reid Hoffman addressed these concerns. Regarding AI and skills development, Hoffman said that prompting with AI tools like ChatGPT is still in a primitive stage. “Right now, no one is prompting in particularly good ways. We’re all like 5% users,” he said, while advising students to learn not just how to prompt, but how to apply AI to real problems. “The question is not just ‘am I prompting’, but ‘which problems could AI be applied to in ways that are interesting and unique?’” For those feeling discouraged about entering the workforce with limited experience, Hoffman framed AI as a potential advantage rather than a threat. “You are generation AI. You are AI native. That makes you enormously attractive in this environment,” he said. Replying to a student from Oxford, who was unsure about accepting a job she didn’t love, Hoffman said, “Doing a job versus not doing a job is always a better way to ultimately be getting to a much better job two, five, ten years out.” He added that while passion matters, early career choices should be seen as strategic steps. “You’re sometimes mistaught that you should only be looking for what you’re passionate about…The key is to be strategic.” AIM reached out to a few undergraduate students to understand how they view AI. Shrravan K, a third-year BTech mechatronics student at MIT, Manipal, told AIM that, at least for now, he believes AI will definitely create more job opportunities. “When it comes to development, nothing can be said about the distant future just yet,” he added. He made it clear that he isn’t particularly worried, as he believes AI will generate more job opportunities in the tertiary and quaternary sectors. However, he acknowledged that labourers in repetitive or low-supervision roles might lose their jobs, as AI can perform these tasks more efficiently and even correct its own mistakes. Sankalp Gupta, a second-year Physics student at Cornell University, told AIM that he hasn’t observed much concern among physics undergrads about AI taking over jobs, especially in academia, where research and ideation dominate. He believes that generating new ideas, which is a key part of physics research, is still beyond current AI capabilities. “In coming up with new ideas, which is 70% to 80% of research, that is not affected by what AI does,” Gupta said. He added that students mainly use ChatGPT to write and refine code for analysing data. What are Tech Leaders saying? 
Similar to what the students mentioned, Infosys founder Narayana Murthy is optimistic about AI and sees it as a driver for the growth of the Indian IT industry, even as concerns about slowing growth and job losses persist. Murthy shared that using tools like ChatGPT has increased his productivity fivefold. “This whole fear that technology will take away jobs is not right. It will create a different kind of job. For example, what I found in using ChatGPT for my speeches was the following: the smartness is in providing the requirement definition for my speech. The smartness is in asking the right question,” he told Moneycontrol in an interview. Hoffman’s statement comes against the backdrop of other industry leaders raising concerns about AI taking over many job roles. In a recent interview with CNBC, Dario Amodei, CEO of Anthropic, issued a stark warning about the potential labour market disruption caused by AI, predicting that up to 50% of entry-level white-collar jobs could disappear in the next five years. He said this shift could push unemployment rates to between 10% and 20%. On the other hand, NVIDIA chief Jensen Huang dismissed Amodei’s predictions. While he acknowledged that some jobs may become obsolete, he added that AI will also create new opportunities. “Whenever companies are more productive, they hire more people.” He sees a future where workers act as “HRs of AI agents”, using AI to better their roles rather than replace them. Meanwhile, OpenAI CEO Sam Altman acknowledged that, judging by the historical pattern of technological change, while AI will eliminate some jobs, it will also create new ones. However, he explained that what’s different now is the unprecedented speed at which this transformation is expected to occur. At the Snowflake Summit 2025, Altman said, “Today AI is like an intern that can work for a couple of hours, but at some point it’ll be like an experienced software engineer that can work for a couple of days.” In a recent podcast, Demis Hassabis, CEO of Google DeepMind, said AI will transform the job market, and that it will likely follow the historical pattern where new and often better roles emerge alongside disruptive technologies. “New jobs are created that are actually better, that utilise these tools,” he said, referencing past shifts brought about by the internet and mobile. He advised undergraduate students to deeply immerse themselves in understanding and using the new AI systems. “It’s still important to study STEM and programming…so that you understand how they’re built,” he said. Hassabis added that those using AI tools could become 10 times more productive. Google CEO Sundar Pichai thinks along the same lines. According to him, AI won’t take away jobs, but rather make employees more productive. “I expect we will grow from our current engineering phase even into next year, because it allows us to do more,” he said. He added that AI is acting as an “accelerator” by handling repetitive tasks, which allows engineers to focus on more meaningful problems. This rise in productivity, he noted, could lead to more hiring as Google scales product development. What Now? For recent graduates, the key lies in embracing AI as a tool, upskilling in relevant technologies, and honing human-centric skills. While AI may disrupt traditional career paths, it also opens doors to new possibilities, provided the next generation is ready to step through. For instance, Hoffman also touched upon how AI is reshaping specific job roles, such as web design.
“The fact that you could produce the ‘website design coding for dummies’ book no longer matters as much, because AI can do all that,” he said. The value now lies in creativity and the ability to align design with broader business objectives. When asked whether college degrees are losing relevance, Hoffman focused on the importance of adaptability. “It’s not necessarily the thing you learned in X101…it’s your capacity to learn.” He encouraged students to prioritise continuous learning over static skill sets. He also stressed the importance of networks. “Life is a team sport, not just an individual sport.” Maintaining peer relationships and building new ones can help students access new opportunities and share resources as the job market evolves.
|
“Today AI is like an intern that can work for a couple of hours, but at some point it’ll be like an experienced software engineer that can work for a couple of days.”
|
["Global Tech"]
|
["AI Jobs"]
|
Siddharth Jindal
|
2025-06-18T16:00:00
|
2025
| 1,261
|
["Anthropic", "ChatGPT", "OpenAI", "AI", "AI Jobs", "RAG", "Ray", "Aim", "generative AI", "R", "Snowflake"]
|
["AI", "generative AI", "ChatGPT", "OpenAI", "Anthropic", "Aim", "Ray", "RAG", "Snowflake", "R"]
|
https://analyticsindiamag.com/global-tech/ai-just-killed-the-graduate-job-market-now-what/
| 2
| 10
| 3
| false
| true
| true
|
31,851
|
This Company Takes LGBTQ Community’s Participation In AI Seriously
|
It has been reported widely that artificial intelligence is prone to biases. Since it is humans who write the algorithms, collect, compile and feed in the data, prejudices have been seen in many AI and machine learning models. Some of these biases are based on gender, race and sexual orientation, among others. This problem of prejudice is not limited to AI and ML models; it extends to workplaces as well — especially in the bigger corporate entities in India. Decriminalisation Of Sec 377 With the decriminalisation of Section 377 — a colonial-era law that banned gay sex — sectors like entertainment and fashion have now embraced gender-based diversity. However, the LGBTQ community has yet to be represented in an objective manner in industries like manufacturing, BFSI, policy and governance, and especially IT and New Tech. Interning With Pride ThoughtWorks, a noted global tech company, just concluded its five-month programme called Interning With Pride, which provided persons from the LGBTQ community with upskilling opportunities in the New Tech sector. Speaking to Analytics India Magazine, Tina Vinod, Diversity and Inclusion Head at ThoughtWorks India, said, “The first month of the training covered Java and object-oriented programming concepts. The second month was driving that into practice, after which we had project simulation. The interns were then assigned projects in the third month.” The programme helped the interns sharpen their programming skills through hands-on sessions on object-oriented programming practices and principles. It also provided them with exposure to both agile practices and the latest industry trends. Vinod added, “The LGBTQ community is one of the most marginalised communities in India. In 2004, we sent two of our employees to the US because the political and social climate in India was not safe for gay people. India is such a diverse country, and for a country like ours having such regressive acts and policies is wrong. Now, even after the verdict on Section 377, a lot of organisations still hesitate while hiring people from the LGBTQ community, thinking that it may hamper the organisation and its success.” Room For Improvement However, Vinod added that she and her team were trying to change that. She told AIM that ThoughtWorks India has seven people from the LGBTQ community who are out and open about it. But this ratio is still abysmal, considering that ThoughtWorks’ total strength in India is 1,500 people across six offices. To do away with these biases and bring more diversity into the organisation, Vinod and her team began the five-month Interning With Pride programme by “sensitising” the rest of the team about diversity at the workplace. ThoughtWorks also conducted anti-sexual-harassment and anti-discrimination training, in the hope of building a culture where people could talk, interact and exchange ideas freely. Looking Forward This five-month programme, which was organised at the company’s Hyderabad office, concluded earlier this month. The company had received 57 applications, out of which four were shortlisted. Now that the interns have successfully completed the programme, ThoughtWorks has extended job offers to all four of them. Karthik D and Jayanthi, who were among the four interns offered a job, said, “This programme helped hone our technical and communication skills amidst open conversations. The intensive training program was set in an environment that encouraged discussion of the LGBTQ history, culture, and issues.
This experience affirmed our belief in the power of LGBTQ allies. We are extremely excited to work at ThoughtWorks.”
|
It has been reported widely that artificial intelligence is prone to biases. Since it is humans who write the algorithms, collect, compile and feed in the data, prejudices have been seen in many AI and machine learning models. Some of these biases are based on gender, race and sexual orientation, among others. This problem of […]
|
["AI Features"]
|
["AI (Artificial Intelligence)", "Data Science", "lgbtq", "thoughtworks"]
|
Prajakta Hebbar
|
2018-12-20T05:56:23
|
2018
| 570
|
["Go", "artificial intelligence", "machine learning", "lgbtq", "AI", "ML", "RAG", "Aim", "thoughtworks", "analytics", "Data Science", "R", "Java", "AI (Artificial Intelligence)"]
|
["AI", "artificial intelligence", "machine learning", "ML", "analytics", "Aim", "RAG", "R", "Go", "Java"]
|
https://analyticsindiamag.com/ai-features/this-company-takes-lgbtq-communitys-participation-in-ai-seriously/
| 3
| 10
| 1
| false
| false
| false
|
48,229
|
Reliance Jio, Airtel and Huawei Battle It Out With Next-Gen AI Solutions At India Mobile Congress
|
The telecom sector’s AI prowess was on full display at the recently concluded India Mobile Congress, organised by the Department of Telecommunications (DoT) and the Cellular Operators Association of India (COAI) between October 14 and 16 at Aerocity, Delhi. The event attracted leading vendors from the telecom, networking and other technology sectors, who unveiled various products in IoT, 5G and AI. Here are the key takeaways from this year: Jio: India’s biggest telecom company, which ousted Bharti Airtel to lead the Indian market, unveiled its patent-filed artificial intelligence-powered Video Call Assistant at India Mobile Congress (IMC) 2019. This all-new AI bot would allow businesses to take their customer service to the next level by letting them automate their communications. The company also launched Jio smart security with AI-centric security and surveillance, involving smart cameras and automatic car entry; managed society Wi-Fi and communication; and Smart Building solutions using IoT. Using Jio’s narrowband IoT showcased at IMC, one can monitor outdoor lighting, waste management, guard efficiency, water consumption, diesel generators and fire safety in real time. Jio also unveiled SG-LAN, an intelligent solution for converged access built with Radisys. Airtel: Bharti Airtel too unveiled industrial solutions for smart factories and other innovations like a next-generation data centre that has the potential to drive deep analytics for a great customer experience. Other solutions included an adaptive traffic control system, city-wide surveillance and a traffic enforcement system. Airtel’s AI-powered video analytics security and surveillance programme integrates advanced features like facial recognition and object detection. Airtel also showcased the next level of remote communication in healthcare, allowing doctors to manoeuvre a robot arm in real time. Huawei: The company showcased the country’s first smart massive MIMO optimisation based on AI technology for the Vodafone Idea network. The MIMO optimisation helps create a pre-5G network for the participating networks, which will be upgraded to a full-fledged 5G network in time. The technology adds automation capabilities to the network, improving optimisation efficiency, boosting cell capacity and enhancing the end-user experience. Industrial-level innovation that targets customer experience is at the heart of Huawei’s future vision in India. The telecom giant also rolled out the Intelligent Operation Center (IOC), a solution specifically designed for India’s smart cities of the future. Qualcomm: American semiconductor giant Qualcomm showcased its WiFi 6 products, which offer higher throughput in congested environments and provide low latency. Other offerings from Qualcomm included a smart indoor camera that provides live remote monitoring with two-way talk using AI-powered person/face detection, 5G-connected ambulances and a smart glove that performs ultrasounds. Qualcomm collaborated with Ericsson to demo India’s first 5G video call on millimetre-wave (mmWave) spectrum. Apart from that, Qualcomm also demonstrated a digital walkie-talkie with voice-over-LTE that allows users to make video and VoIP calls and share texts and multimedia with other users of the technology. Startup India: Startup India presented smart network connectivity solutions ranging from next-generation WiFi and WiFi for 5G to India’s first automated cloud platform.
Startup India also introduced Sensorise, an M2M service provider that supplies end-to-end solutions for the lifecycle management of remote-asset eSIMs and IoT devices. Overview: At a time when the country’s telecom sector is going through a difficult phase, riddled with financial challenges, India Mobile Congress offered a ray of hope for the industry. Leveraging the power of 5G, AI and IoT, hundreds of demos suggested that India’s telecom industry will eventually recover strongly as the country moves towards its $5-trillion GDP target. Next to watch is the 5G spectrum allocation, which will take place in the current fiscal year. Telecom players are expected to bid aggressively to cement their showcased innovations.
|
Reliance Jio, Airtel and Huawei Battle It Out With Next-Gen AI Solutions At India Mobile Congress The telecom sector’s AI prowess was on full display at the recently concluded India Mobile Congress organised by the Department of Telecommunications (DoT) and Cellular Operator Association of India (COAI) between October 14 and 16 at Aerocity, Delhi. The […]
|
["AI Features"]
|
["5G", "Airtel", "Huawei", "Jio", "Qualcomm", "Vodafone"]
|
Vishal Chawla
|
2019-10-18T15:00:24
|
2019
| 619
|
["Jio", "Go", "5G", "artificial intelligence", "AI", "R", "Git", "RAG", "Ray", "Airtel", "Qualcomm", "analytics", "object detection", "GAN", "Huawei", "Vodafone"]
|
["AI", "artificial intelligence", "analytics", "Ray", "RAG", "object detection", "R", "Go", "Git", "GAN"]
|
https://analyticsindiamag.com/ai-features/india-mobile-congress-5g-jio-airtel-huawei/
| 2
| 10
| 4
| false
| false
| false
|
10,054,727
|
SceneFormer: A Transformer for 3-D Indoor Scene Generation
|
As a result of different interior designs and living activities, real-world indoor scenes vary greatly in terms of the number, type, and layout of objects placed in a room. Have you ever wondered how such indoor scenes can now be generated by leveraging AI, handling practical issues like object dimensions, object locations and accurate placement? In this post, we’ll take a look at a transformer, named SceneFormer, that can generate realistic 3D indoor scenes. The key points to be discussed are listed below. Table of Contents: What is Indoor Scene Generation; How SceneFormer Addresses the Task; Layout and Architectural Details; Data Preparation; How SceneFormer is Used. Let’s start the discussion by understanding what indoor scene generation is. What is Indoor Scene Generation? Creating realistic 3D indoor scenes has a wide range of real-world applications in 3D content creation. Real estate and interior design firms, for example, can quickly visualize a furnished room and its contents without having to rearrange any physical items. Such a room can be presented on augmented or virtual reality platforms, such as a headset, allowing a user to walk through and interact with their future home. Manually modelling indoor scenes with a variety of realistic object layouts is a time-consuming task that requires professional skills. Automatic scene generation techniques attempt to model the properties and distributions of objects in real-world scenes and generate new 3D scenes in two steps. First, these methods determine the layout (i.e. orientation and position) and properties (e.g. type and shape) of the objects in a room of a specific size and shape. Then, based on each object’s properties, they retrieve a Computer-Aided Design (CAD) model from a 3D object database and place the resulting CAD models in the room according to their layout. A graph is a natural representation of a scene, in which each object is a node and an edge is a relationship between objects (for example, ‘is next to’ or ‘is on top of’). Walls, doors, and windows are examples of room features that can also be represented as nodes in the graph. If the model is autoregressive on the graph, generating one node at a time, the input can be initialized with the required object nodes and the graph expanded repeatedly. Such a representation lends itself well to processing with graph convolutional networks. Alternatively, a scene can be represented by a top-down view of the objects, walls, and floor. Object properties are predicted as continuous or discrete values, and walls and floors are predicted as binary images. This can represent arbitrary room shapes and sizes by normalizing the room dimensions to a known maximum dimension along each axis. Modern CNN architectures, such as ResNet, can help with such image representations. Now let’s discuss how SceneFormer takes on this task. How SceneFormer Addresses the Task: The task of scene generation from a room layout is addressed by Xinpeng Wang et al., who generate a set of objects and their arrangements in the room. Each object is given a predicted class category, a three-dimensional location, an angular orientation, and a three-dimensional size. Once this sequence has been generated, the most relevant CAD model for each object is retrieved from a database and placed in the scene at the predicted location. A CAD model’s suitability can be determined solely by its size, shape descriptor, or other heuristics such as texture, as illustrated in the sketch below.
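To make the per-object representation concrete, here is a minimal sketch, not taken from the SceneFormer codebase, of the four predicted properties and a naive size-based CAD retrieval step; the SceneObject record and retrieve_cad_model helper are hypothetical names introduced only for illustration, and the database entries are assumed to be simple dicts with 'category' and 'size' keys.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SceneObject:
    category: str        # predicted class category, e.g. "bed" or "nightstand"
    location: tuple      # predicted (x, y, z) position in the room
    orientation: float   # predicted rotation angle around the vertical axis
    size: tuple          # predicted (length, width, height)

def retrieve_cad_model(obj: SceneObject, cad_database: list) -> dict:
    """Pick the CAD model of the same category whose dimensions best match the prediction."""
    candidates = [m for m in cad_database if m["category"] == obj.category]
    # Nearest-size heuristic; the paper also mentions shape descriptors and texture as alternatives.
    return min(candidates,
               key=lambda m: np.linalg.norm(np.array(m["size"]) - np.array(obj.size)))
```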
CAD model selection reduces object collisions, allows for special object properties like symmetry, and ensures object style consistency. SceneFormer is a set of transformers that autoregressively predict the class category, location, orientation, and size of each object in a scene, based on the idea that a scene can be treated as a sequence of objects. It has been demonstrated that such a model can generate realistic and varied scenes with little domain knowledge or data preparation. Furthermore, it does not use any visual information as input or as an internal representation, such as a 2D rendering of the scene. In summary, SceneFormer converts scene generation into a sequence generation task by producing an indoor scene as a sequence of object properties. It relies on the self-attention of transformers to implicitly learn the relationships between objects in a scene, obviating the need for manually annotated relationships. It can also create complex scenes by predicting discretized 3D object coordinates conditioned on the room layout or on text descriptions, and it constructs these conditional models using the Transformer decoder’s cross-attention mechanism. Layout and Architectural Details: As shown in the layout figure of the original post (“The layout of SceneFormer”), the model takes as input the room layout, which describes the shape of the room and the locations of doors and windows. The SceneFormer model sequentially generates the properties of the next object before inserting it into the existing scene, producing the final output scene at the end. Data Preparation: In preprocessing, each scene is treated as a sequence of objects, ordered by how frequently their class categories occur in the train set. This object ordering, up to the ordering within objects of the same class, is required to produce a unique representation of each scene. Each object’s location is then normalized by the maximum room size and quantized into the range [0, 255], yielding the object’s new coordinates (x, y, z). Similarly, the length, width, and height of each object are scaled and quantized. Start and stop tokens are then added to each sequence, indicating its start and end, before the sequence is padded to the maximum length found in the train set (a minimal sketch of this step follows at the end of this section). How SceneFormer is Used: The model architecture is depicted in the paper’s architecture figure. To generate object property sequences, the authors use a transformer decoder, training a separate model to predict the corresponding token of the current object for each of the four properties. The properties of objects are predicted in an autoregressive way, with each object’s properties conditioned on the properties of previously predicted objects. In the architecture figure, start tokens are shown in green, existing object tokens in grey, and new object tokens in yellow; padding and stop tokens are not shown. As input, all models take three types of sequences: category, orientation, and location. Except in the case of the dimension model, their outputs are appended to the existing sequence before being passed on. A model with N output tokens is run N times, with each step producing one token.
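To illustrate the data-preparation step described above, here is a minimal sketch, assuming a hypothetical maximum room size and hypothetical ids for the special tokens; the post only specifies that coordinates are quantized into [0, 255] and that start, stop and padding tokens are added.

```python
import numpy as np

MAX_ROOM_SIZE = 6.4   # assumed maximum room extent in metres, purely illustrative
NUM_BINS = 256        # coordinates are quantized into the range [0, 255]
START, STOP, PAD = 256, 257, 258   # hypothetical ids for the special tokens

def quantize(value, max_value=MAX_ROOM_SIZE, bins=NUM_BINS):
    """Normalize a continuous coordinate by the maximum room size and map it to an integer bin."""
    normalized = np.clip(value / max_value, 0.0, 1.0)
    return int(round(normalized * (bins - 1)))

def build_location_sequence(objects, max_len):
    """Flatten per-object (x, y, z) locations, already ordered by class frequency,
    into one token sequence with start, stop and padding tokens."""
    tokens = [START]
    for obj in objects:
        tokens.extend(quantize(c) for c in obj["location"])
    tokens.append(STOP)
    tokens.extend([PAD] * (max_len - len(tokens)))
    return tokens
```

The dimension sequence would be built the same way from each object's (length, width, height) values.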
Now let’s quickly walk through how the model represents sequences, the embedding procedure, and inference. Sequence Representation: Multiple sequences are fed into each model. Because each object’s location (x, y, z) and dimension (l, w, h) are three-dimensional, the input sequences for location and dimension are constructed by concatenating the (xi, yi, zi) and (li, wi, hi) tuples of successive objects. As a result, the remaining input sequences are repeated three times to stay aligned. During training, the corresponding input sequence is shifted to the left by one token so that the model conditions on the properties of the current object. Embedding Procedure: Each model employs learned position and value embeddings of the input sequences. The position embedding indicates the object’s position in the sequence, while the value embedding indicates the token’s value. The researchers added another embedding layer to the location and dimension models to indicate whether a token is an x, y, or z coordinate in the location sequence, or an l, w, or h value in the dimension sequence. The embeddings of all sequences are then added together. A linear layer converts the output embedding into N logits, where N is the number of possible quantized values, and each network is trained independently with a cross-entropy loss. Inference Procedure: During inference, object properties are generated in the order of class category, orientation, location, and dimension. Once a new token is generated, it is appended to the corresponding sequence and given as input to the next model. The location and dimension models are each run three times, yielding the three output tokens (x, y, z) and (l, w, h), respectively. Tokens from the category model are drawn with probabilistic nucleus sampling (p = 0.9), while the other three models simply select the highest-scoring token (a generic sketch of this sampling step is given below). Generation terminates if any model outputs a stop token. An example in the original post shows how SceneFormer generates an indoor scene from the layout of a given room. Final Words: We have seen SceneFormer, a transformer-based model that aims to generate realistic 3D indoor scenes for real-world applications by combining several transformer models. SceneFormer allows for flexible data learning, implicit learning of object relations, and quick inference, which may make it possible to generate interactive scenes from partial scenes. Reference: Official Research Paper | Official GitHub Repository
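As a footnote to the inference procedure described above, here is a generic sketch of nucleus (top-p) sampling with p = 0.9, the scheme applied to the category model’s outputs; this is not code from the official repository, and the function name is illustrative.

```python
import torch

def nucleus_sample(logits: torch.Tensor, p: float = 0.9) -> int:
    """Sample one token id from the smallest set of tokens whose cumulative probability reaches p."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep a token if the probability mass before it is still below p; the top token is always kept.
    keep = cumulative - sorted_probs < p
    kept_probs = sorted_probs * keep
    kept_probs = kept_probs / kept_probs.sum()
    choice = torch.multinomial(kept_probs, num_samples=1)
    return sorted_idx[choice].item()
```

In SceneFormer’s setting, logits would be the category model’s output over the class-category vocabulary at the current step; the orientation, location and dimension models would instead simply take the highest-scoring token.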
|
For 3D content creation, creating realistic 3D indoor scenes has a wide range of real-world applications.
|
["AI Trends"]
|
["Data Science", "Deep Learning", "Generative Pre-Trained Transformer", "Machine Learning", "Transformers", "Vision Transformers"]
|
Vijaysinh Lendave
|
2021-12-04T10:00:00
|
2021
| 1,470
|
["Go", "Vision Transformers", "TPU", "attention mechanism", "AI", "Machine Learning", "Transformers", "Git", "RAG", "Aim", "CNN", "Generative Pre-Trained Transformer", "Deep Learning", "Data Science", "R"]
|
["AI", "Aim", "Transformers", "RAG", "TPU", "R", "Go", "Git", "attention mechanism", "CNN"]
|
https://analyticsindiamag.com/ai-trends/sceneformer-a-transformer-for-3-d-indoor-scene-generation/
| 3
| 10
| 0
| true
| true
| true
|
10,085,345
|
Google vs India, the Fight Intensifies
|
The Competition Commission of India (CCI) has been pursuing Google since October 2022 over alleged violations of antitrust laws. With hefty fines being levied by the Indian regulator, the tech giant has warned about a potential price hike in the smartphone market and other issues that the economy may face. In a blog post last week, Google raised the alarm that devices built on ‘forks’ of Android would prevent the company from securing them against cybercrime, bugs, and malware, as these versions will not support Google’s security and user safety features. Although Google currently holds itself accountable for Play Store applications as well as compliance with local laws, the same assurance may not be available for apps sideloaded from other sources. Moreover, in a forked Android environment, developers will have to prioritise among the various incompatible ‘forks’ they write and maintain apps for, as costs are bound to increase with every additional version they support. This could prevent users from accessing important online services simply because developers may not be able to invest in developing applications for their devices, the company said. The Antitrust Chronicle In 2022, the CCI imposed a whopping INR 1,337 crore fine on Google for misusing its dominant position in the Android mobile ecosystem and INR 936 crore for abusing its monopoly through the Play Store. The commission also held Google responsible for entering into one-sided agreements with smartphone makers to dominate the Android ecosystem. Google filed its application before the NCLAT at the end of December, challenging the CCI’s October 2022 order imposing the penalty. Last year, in a similar case, Google paid 4.125 billion euros to the European Union after losing its appeal in an antitrust case linked to its Android operating system. Chief Justice DY Chandrachud asked Google’s counsel if the company was ready to follow the same standards in India as it did in Europe. He also noted that the CCI order was passed on October 20, but Google went before the NCLAT much later and could have filed an appeal earlier, as per LiveLaw. India Plays Hard to Get A project to create a mobile operating system called ‘IndOS’ is reportedly in the works by the Indian government. A government official has confirmed to Business Standard that the new OS would be safer, give users more options and offer stiff competition to Google and Apple. As of December 2022, Android held a dominant 96% share of the Indian mobile OS market. Moreover, in 2022, annual application downloads in India hit a new high of 29 billion, making it the second biggest app market globally after China and offering developers a strong platform to establish viable businesses on Play. However, as the Indian market becomes increasingly hard to ignore for Big Tech, regulatory pressure is also making companies like Google anxious. After coming under heavy scrutiny, Google appears to be trying to reconcile with the Indian government through a billion-dollar investment via the Google for India Digitization Fund and by purchasing a stake in Jio Platforms. In 2022, Google launched ‘India Ki Udaan’ to mark 75 years of Independence. Meanwhile, Google also announced a collaboration with the Ministry of Culture, along with a grant of $1 million to IIT Madras for setting up a Centre for Responsible AI. Besides, Google for India 2022, held with much fanfare in New Delhi, saw the announcement of a plethora of such programmes as part of the company’s commitment to building a more inclusive, helpful, and safer internet for every Indian.
The scale at which tech is expanding and touching people’s lives worldwide makes it imperative to have tighter regulations in place. Countries need to consider how best to safeguard their citizens, be it on privacy or security. “It’s an important phase and we are engaging constructively,” Pichai said while elaborating on the company’s views on India’s tech regulations.
|
Google’s veiled threat to India: Cost, security not in our hands anymore
|
["Global Tech"]
|
["Sundar Pichai"]
|
Tasmia Ansari
|
2023-01-18T11:00:00
|
2023
| 633
|
["Go", "AWS", "AI", "cloud_platforms:AWS", "programming_languages:R", "programming_languages:Go", "Git", "responsible AI", "Rust", "R", "Sundar Pichai"]
|
["AI", "AWS", "R", "Go", "Rust", "Git", "responsible AI", "cloud_platforms:AWS", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/global-tech/google-vs-india-the-fight-intensifies/
| 3
| 10
| 2
| false
| false
| false
|