When AI Hype Meets the Venture Capital Pause Button

[Image: AI winter approaching]

Happy Monday, everyone. 😀 I'm quite nervous this week since I have my practical German driving license exam. The stories about German driving exams being crazy hard are true. I hope self-driving cars will save my ass in the near future. Let's start this week with AI research, advancements, and much more.

Here are the 4 interesting AI things that I learned and enjoyed this week.

In the world of AI and physics, finding the best solution from all possible options is a common challenge. Take, for example, the swinging pendulum problem, where we need to figure out the most efficient way to move a pendulum. Traditional methods like gradient descent or genetic algorithms can be costly and may not work well with such complex systems. Physics-Informed Neural Networks (PINNs) tackle this with a neural network whose loss, the measure of how far off its predictions are, is informed by physical laws. The network takes into account the governing physical equations, constraints such as the pendulum's starting and ending points, and the objective, such as the shortest swing time. By adjusting its weights to minimize this physics-based loss, a PINN can efficiently solve the optimization problem and find the best trajectory for the pendulum's motion.
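
To make this concrete, here is a minimal PINN sketch in PyTorch (an assumption; no framework is named here). To keep it short, it solves the pendulum's equation of motion θ'' + (g/L)·sin θ = 0 as a forward problem rather than the full time-optimal control task, but the ingredients are the same: an ODE residual plus constraint terms form the physics-based loss the network minimizes.

```python
# Minimal PINN sketch (PyTorch assumed) for the pendulum ODE
# theta'' + (g/L) * sin(theta) = 0, with theta(0)=theta0, theta'(0)=0.
# All constants below are illustrative, not from the article.
import torch
import torch.nn as nn

g, L = 9.81, 1.0   # gravity (m/s^2) and pendulum length (m)
theta0 = 0.5       # initial angle in radians

# Small fully connected network mapping time t -> angle theta(t)
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points in time where the physics must hold
t = torch.linspace(0.0, 5.0, 200).reshape(-1, 1).requires_grad_(True)

for step in range(5000):
    opt.zero_grad()
    theta = net(t)
    # Time derivatives of the network output via autograd
    dtheta = torch.autograd.grad(theta, t, torch.ones_like(theta), create_graph=True)[0]
    d2theta = torch.autograd.grad(dtheta, t, torch.ones_like(dtheta), create_graph=True)[0]
    # Physics loss: residual of theta'' + (g/L) * sin(theta) = 0
    physics_loss = ((d2theta + (g / L) * torch.sin(theta)) ** 2).mean()
    # Constraint loss: enforce theta(0) = theta0 and theta'(0) = 0
    t0 = torch.zeros(1, 1, requires_grad=True)
    theta_at_0 = net(t0)
    dtheta_at_0 = torch.autograd.grad(theta_at_0, t0, torch.ones_like(theta_at_0), create_graph=True)[0]
    constraint_loss = (theta_at_0 - theta0) ** 2 + dtheta_at_0 ** 2
    loss = physics_loss + constraint_loss.squeeze()
    loss.backward()
    opt.step()
```

Adding the objective (for instance, minimum swing time) as one more loss term turns this forward solve into the control problem described above.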

[Image: AI meme]

The buzz around OpenAI's latest funding round isn't as loud this time, with some big VC firms deciding to sit it out, hinting at a cool-down in the generative AI frenzy. The skepticism stems from concerns over the lofty valuations of AI startups versus their profit-making capabilities. OpenAI, despite its success, is facing internal strategy disputes and increasing competition, adding to the uncertainty. Venture capitalists are now looking for clearer paths to profitability from these AI ventures, beyond just the promise of tapping into vast markets. The article suggests that without demonstrating sustainable business models, more investors might remain on the sidelines.

In our last edition we talked about Bayesian Optimization. This week we will explore the perceptron, the building block of neural networks. Just like neurons in the brain receive signals through dendrites, process them in the cell body, and send out a response through the axon, a perceptron takes in various inputs, weighs their importance, and outputs a decision. In both cases, whether a neuron fires or a perceptron activates depends on the cumulative strength and relevance of the incoming signals. For example, a perceptron can evaluate student performance by weighing inputs like test scores and grades, summing them, and applying a step function that labels a student a high or low performer depending on whether the sum crosses a threshold. Geometrically, this separates the students into two groups by plotting them on a graph and drawing a boundary line between high and low performers. By adjusting its weights and threshold, the perceptron learns from data, improving its decision-making and serving as a foundational element of more complex neural networks.
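
Here is a minimal sketch of that student example (NumPy assumed; the scores, labels, and learning rate are all made up for illustration):

```python
# Tiny perceptron for the hypothetical student-performance example:
# inputs are a test score and a grade (both scaled 0-1),
# the label is 1 for "high performer" and 0 otherwise.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: [test_score, grade] pairs with made-up labels
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],
              [0.3, 0.4], [0.2, 0.1], [0.4, 0.3]])
y = np.array([1, 1, 1, 0, 0, 0])

w = rng.normal(size=2)   # one weight per input
b = 0.0                  # bias term, playing the role of the threshold

def step(z):
    """Step activation: fire (1) if the weighted sum crosses the threshold."""
    return (z >= 0).astype(int)

# Classic perceptron learning rule: nudge the weights toward misclassified points
lr = 0.1
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = step(np.dot(w, xi) + b)
        error = target - pred
        w += lr * error * xi
        b += lr * error

print(step(X @ w + b))   # predictions after training
```

The learned line w·x + b = 0 is exactly the boundary between high and low performers described above.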

To develop Gen AI apps locally without needing GPUs, you can follow these practical steps (a code sketch follows the list):

  1. Install LocalLLM: Begin by installing the LocalLLM package, which enables LLM development on your local machine.

  2. Use Pre-Trained Models: Take advantage of the pre-trained large language models provided by LocalLLM. For example, if you are building a text summarization app, choose a model specialized in understanding and condensing text.

  3. Integrate with Your App: Embed the model into your app's code, ensuring it can process user input and return summaries. Check that the app accurately summarizes texts of various lengths and types without internet delays.

  4. Test Locally: Run and test your Gen AI applications directly on your local machine, without the need for cloud services or GPUs.
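
As a concrete sketch of step 3, here is what the integration might look like using llama-cpp-python as the local, CPU-only runtime. This is an assumption, not LocalLLM's exact API, and the model file, prompt template, and parameters below are illustrative placeholders.

```python
# Minimal local text-summarization sketch, assuming llama-cpp-python
# and a quantized model file already downloaded to disk (no GPU needed).
from llama_cpp import Llama

# Placeholder model path; swap in whichever quantized model you pulled locally
llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

def summarize(text: str) -> str:
    """Return a short summary of `text`, generated entirely on-device."""
    prompt = f"Summarize the following text in two sentences:\n\n{text}\n\nSummary:"
    out = llm(prompt, max_tokens=128, temperature=0.2, stop=["\n\n"])
    return out["choices"][0]["text"].strip()

if __name__ == "__main__":
    article = "Physics-informed neural networks bake physical laws into the loss..."
    print(summarize(article))
```

Because everything runs on your own machine, you can iterate on prompts and test inputs of different lengths without cloud costs or network latency, which is the whole point of step 4.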

⏭ Stay curious, keep questioning.