Just a sneak peek at the articles – please go through them in detail when you get a chance (Click Here)

Physician Engagement Optimization: Reinforcement Learning-based Omni-Channel GenAI approach for Maximizing Email Open Rates and embracing Representative preferences to target HCPs - Author ASHISH GUPTA
Improve Customer Experience and Omnichannel Effectiveness through Customer Journey Analytics - Authors Jingfen Zhu, PhD Rakesh Sukumar, Ankit Majumder
Healthcare Provider (HCP) Behavior Assessment: Identifying latent subgroups of HCPs and Salesforce eSales Aid Impact Analysis - Authors Sachin Ramesh, Karthick Karuppusamy

top ai certification in 2024

Top AI Certifications for 2024. In the ever-changing world of… | by Philip Smith | Blockchain Council | Nov, 2023 | Medium

10 Valuable Artificial Intelligence Certifications for 2024 (analyticsinsight.net)

10 AI Certifications for 2024: Build Your Skills and Career | Upwork


Intel® Edge AI Certification

Jetson AI Courses and Certifications | NVIDIA Developer

Microsoft Certified: Azure AI Engineer Associate - Certifications | Microsoft Learn

Artificial Intelligence Certification | AI Certification | ARTIBA

Certified Artificial Intelligence Scientist | CAIS™ | USAII®

good introduction

Dear Norma, I hope this email finds you well. I am writing to express my strong interest in the HealthCare roles. I came across the job opening and was immediately drawn to the opportunity to collaborate with diverse lines of business and leverage data analytics and machine learning capabilities to drive actionable insights. With my background in Engineering/Technology and extensive knowledge in Data Engineering, Data Analytics, and Advanced Analytics, I am confident in my ability to uncover valuable enterprise insights and implement data management applications that contribute to operational effectiveness. My passion for Machine Learning and Artificial Intelligence has led me to develop end-to-end ML workflows, including data collection, feature engineering, model training, and deploying models in production. Throughout my career, I have utilized Python, PySpark, and SQL to build robust backend solutions and employed visualization tools such as Power BI and Tableau to effectively communicate data insights. Additionally, I have hands-on experience working with cloud platforms like Azure and have expertise in creating ETL pipelines and leveraging distributed computing for scalability. One of the aspects that excites me the most about this role is the opportunity to operationalize and monitor machine learning models using MLflow and Kubeflow while applying DevOps principles to ensure smooth deployment and management. I am also experienced in designing executive dashboards that provide actionable insights, empowering decision-making at all levels. With a bachelor's degree in mathematics and a master's degree in a quantitative field like Artificial Intelligence, I am well-equipped to tackle complex data challenges and provide innovative solutions. My 11+ years of experience in data science settings My functional and technical competencies encompass a wide array of skills, including data analytics, data engineering, cloud technologies, and data science, making me confident in my ability to contribute effectively to the success of Global Solutions. If possible i would like to discuss further how my qualifications align with the role's requirements and how I can be a valuable addition to the team. I am eagerly looking forward to the opportunity to connect and explore this exciting career prospect further. Best regards, Salman Ahmed 

+44-7587652115

AI Prompt Engineering

Understanding Big Language Models:

1. DALL-E 2 (Open.AI)
2. Stable Diffusion (Stability.AI)
3. Midjourney (Midjourney)
4. Codex - Github Copilot (Open.AI)
5. You.com (You.com)
6. Whisper AI (Open.AI)
7. GPT-3 Models (175B?) (Open.AI)
8. OPT (175B and 66B) (Meta)
9. BLOOM (176B) (Hugging Face)
10. GPT-NeoX (20B) (Eleuther.AI)

Topics where user can contribute:

  • Retrieval augment in-context learning
  • Better benchmarks
  • "Last Mile" for productive applications
  • Faithful, human-interpretable explanations. 

Prompt Engineering Overview:

At the very basic we have interface to interact with a language model, where we pass some instruction and the model passes a response. The response is generated by the language model.

A prompt is composed with the following components:

  • Instructions
  • Context (this is not always given but is part of more advanced techniques)
  • Input Data
  • Output Indicator

Settings to keep in mind:

  • When prompting a new language model you should keep in mind a few settings
  • You can get very different results with prompts when using different settings
  • One important setting is controlling how deterministic the model is when generating completion of prompts:
    • Temperature and top_p are two important parameters to keep in mind.
    • Generally, keep these low if you are looking for exact answers like mathematics equation answers
    • ... and keep them high for more diverse responses like text generation, poetry generation.

Designing prompts for Different Tasks:

Tasks Covered:

  • Text Summarization
  • Question Answering
  • Text Classification
  • Role Playing
  • Code Generation
  • Reasoning
      Prompt Engineering Techniques: Many advanced prompting techniques have been designed to improve performance on complex tasks.
      • Few-Shot prompts
      • Chain-of-Thought (CoT) prompting
      • Self-Consistency
      • Knowledge Generation prompting
      • ReAct


      Tools & IDE's : Tools, libraries and platforms with different capabilities and functionalities include:

      • Developing and Experimenting with Prompts
      • Evaluating Prompts
      • Versioning and deploying prompts
      • Dyno
      • Dust
      • LangChain
      • PROMPTABLE

      Example of LLMs with external tools:

      • The generative capabilities of LLMs can be combined with an external tool to solve complex problems.
      • The components you need:
        • An agent powered by LLM to determine which action to take
        • A tool used by the agent to interact with the world (e.g. search API, Wolfram, Python REPL, database lookup)
        • The LLM that will power the agent.

      Opportunities and Future Directions:

      • Model Safety: This can be used to not only improve the performance but also the reliability of response from a safety perspective.
        • Prompt engineering can help identify risky behavior of LLMs which can help to reduce harmful behaviors and risks that may arise from language models.
        • There is also a part of the community performing prompt inject to understand the vulnerability of LLMs.
      • Prompt Injection: it turns out that building LLMs, like any other systems comes with safety and challenges and safety considerations. Prompt injection aim to find vulnerabilities in LLMs.
        • Some common issues include:
          • Prompt Injection
          • Prompt Leaking: It aims to force the model to spit out information about its own prompt. This can lead to leaking of either sensitive, private or information that is confidential. 
          • Jailbreaking: Is another form of prompt injection where the goal is to bypass safety and moderation features.
            • LLMs provided via API's might be coupled with safety features or content moderation which can be bypassed with harmful prompts/attacks.
      • RLHF: Train LLM's to meet a specific human preference. Involves collecting high-quality prompt datasets. 
        • Popular Examples : 
        • Claude (Anthropic)
        • ChatGPT (OpenAI)
      • Future Directions include:
        • Augmented LLM's
        • Emergent ability of LLM's
        • Acting / Planning - Reinforcement Learning
        • Multimodal Planning
        • Graph Planning





      A token is ChatGPT is roughly 4 words.

      LLM Chat GPT

      Some notes on Recurrent Neural Network: A neural network which has a high hidden dimension state. When a new observations comes it updates its high hidden dimension state.

      In machine learning there is lot of unity in principles to be applied to different data modalities. We use the same neural net architecture, gradients and adam optimizer to fine tune the gradients. For RNN we use some additional tools to reduce the variance of the gradients. For example: using CNN for image learning or Transformers to NLP problems. Years back in NLP for every tiny problem there was a different architecture. 

      Question : Where does vision stop and language begin

      1. Proposed future is to develop Reinforcement Learning  techniques to help supervised learning perform better.
      2. Another are of active research is Spike-timing-dependent plasticity. The concept of STDP has been shown to be a proven learning algorithm for forward-connected artificial neural network in pattern recognition. A general approach, replicated from the core biological principles, is to apply a window function (Δw) to each synapse in a network. The window function will increase the weight (and therefore the connection) of a synapse when the parent neuron fires just before the child neuron, but will decrease otherwise.

      With Deep learning we are looking at a static problem with a probability distribution and applying the model to the distribution.

      Back Propagation is useful algorithm and not go away, because it helps in finding a neural circuit subject to some constraints.

      For Natural Language Modelling it is proven that very large datasets work because we are trying to predict the next word by broad strokes and surface level pattern. Once the language model becomes large, it understand the characters, spacing, punctuations, words, and finally the model learns the semantics and the facts.

      Transformers is the most important advance in neural networks. Transformers is a combination of multiple ideas in which attention is one in which attention is a key. Transformers is designed in a way that it runs on a really fast GPU. It is not recurrent, thus it is shallow (less deep) and very easy to optimize.

      After Transformers to built AGI, research is going on in Self Play and Active Learning.

      GAN's don't have a mathematical cost function which it tries to optimize by gradient descent. Instead there is a game in which through mathematical functions it tries to find equilibrium.

      Other example of deep learning models without cost function is reinforcement learning with self-play and surprise actions. 


      Double Descent:

      When we make neural network larger it becomes better which is contrarian to statistical ideas. But there is a problem called the double descent bump as shown below;

      Double descent occurs for all practical deep learning systems. Take a neural network and start increasing its size slowly while keeping the dataset size fixed. If you keep increasing the neural network size and don't do early stopping then, there is increase in performance and then it gets worse. It the point the model gets worst is precisely the point at which the model gets zero training error or zero training loss and then when we make it larger it start to get better again. It counter-intuitive because we expect the deep learning phenomenon to be monotonic.

      The intuition is as follows:

      "When we have a large data and a small model then small model is not sensitive to randomness/uncertainty in the training dataset. As the model gets large it achieves zero training error at approximately the point with the smallest norm in that subspace. At the point the dimensionality of the training data is equal to the dimensionality of the neural network model (one-to-one correspondence or degrees of freedom of dataset is same as degrees of freedom of model) at that point random fluctuation in the data worsens the performance (i.e. small changes in the data leads to noticeable changes in the model). But this double descent bump can be removed by regularization and early stopping."

      If we have more data than parameters or more parameters than data, then model will be insensitive to the random changes in the dataset.

      Overfitting: When model is very sensitive to small random unimportant stuff in the training dataset.

      Early Stop: We train our model and monitor our performance and at some point when the validation performance starts to become worse we stop training (i.e. we determine to stop training and consider the model to be good enough)


      ChatGPT:

      ChatGPT has become a water-shed moment for organization because all companies are inherently language based companies. Whether it is text, video, audio, financial records all can be described as tokens which can be fed to large language models.

      A good example of this is when during training of ChatGPT on amazon reviews, they found that after large amount of training the model became an excellent classifier of sentiment. So the model from predicting the next word (token) in a sentence, started to understand the semantics of the sentence and could tell if the review was a positive or negative.

      With Advancement of AI, we have a likeness of a particular person as a separate bot, and the particular person will get a say, cut and licensing opportunities of his likeness.

      Great Google Analytics courses and Google Material on Udemy

      All the material is for for getting certification for Google Universal Analytics or GA3, but the material will also help to prepare for GA4. Unfortunately GA4 is very new and very few people are using it. 

      Udemy:

      https://www.udemy.com/share/101YUA3@1ZQpoeanMxxthiBi3TRUePtvhK8jpKedLNfathrLsI_5x8FtERy5aZusAp5R/


      This one is excellent resource before the exam

      https://www.udemy.com/share/1057WK3@B0vqy8cXKsPzaotyxGtf8OMJUbk6LabDRa9MvahhOqCaaXBprgawEPRvwRFK/


      Google Material

      https://skillshop.exceedlms.com/student/catalog/list?category_ids=6431-google-analytics-4

      https://skillshop.exceedlms.com/student/path/2938