This is my work-in-progress ~50 cm robotic arm, with 6 servo motors (one per axis), an Arduino, and a Raspberry Pi (for image recognition in phase 2):
While writing the C++ code for the Arduino, I found I needed a quick and easy way to send commands to it from my iPhone over Bluetooth.
Basically, I need to control my robotic arm from an external device instead of re-uploading and re-running the code on the Arduino board over and over.
For this reason, I attached a BLE board to the Arduino and created a simple app (written entirely in SwiftUI 😍😎) that connects to the Arduino's BLE board and sends string commands, which are then parsed and executed.
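Below is a minimal sketch of the iOS side of that idea: a CoreBluetooth helper that scans for the module, connects, and writes a string command. The service and characteristic UUIDs (FFE0/FFE1, typical of HM-10-style modules) and the "S1:90" command format are placeholder assumptions, not necessarily the exact ones my app uses.

```swift
import CoreBluetooth

// Minimal sketch: scan for the BLE module, connect, and write a string command.
// The UUIDs below are placeholders; substitute the ones your board advertises.
final class ArmCommander: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    // Hypothetical UUIDs; HM-10-style modules typically expose FFE0/FFE1.
    private let serviceUUID = CBUUID(string: "FFE0")
    private let commandUUID = CBUUID(string: "FFE1")

    private var central: CBCentralManager!
    private var arm: CBPeripheral?
    private var commandCharacteristic: CBCharacteristic?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [serviceUUID])
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        arm = peripheral            // keep a strong reference before connecting
        peripheral.delegate = self
        central.stopScan()
        central.connect(peripheral)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.discoverServices([serviceUUID])
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        guard let service = peripheral.services?.first(where: { $0.uuid == serviceUUID }) else { return }
        peripheral.discoverCharacteristics([commandUUID], for: service)
    }

    func peripheral(_ peripheral: CBPeripheral,
                    didDiscoverCharacteristicsFor service: CBService, error: Error?) {
        commandCharacteristic = service.characteristics?.first(where: { $0.uuid == commandUUID })
    }

    /// Sends a plain-text command such as "S1:90" (a made-up format:
    /// move servo 1 to 90°) for the Arduino sketch to parse.
    func send(_ command: String) {
        guard let arm, let characteristic = commandCharacteristic,
              let data = (command + "\n").data(using: .utf8) else { return }
        arm.writeValue(data, for: characteristic, type: .withoutResponse)
    }
}
```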
Today I want to share a simple way to retrieve user data from a native iOS cache image, automatically generated when your app goes into the background.
This kind of information can be retrieved if your phone is lost or jailbroken, or by connecting it to a PC and opening an old backup. You can use tools like iBackup Viewer and more…
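As a side note, the usual defense is to cover the window just before iOS captures that snapshot, so sensitive content never lands in the cached image. A minimal sketch, assuming a scene-based UIKit lifecycle; the blur cover is my own hypothetical choice:

```swift
import UIKit

// Minimal sketch of the common mitigation: add a cover view before iOS takes
// its background snapshot, remove it when the app becomes active again.
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?
    private let snapshotCover = UIVisualEffectView(effect: UIBlurEffect(style: .regular))

    // Called just before the app is backgrounded and the snapshot is taken.
    func sceneWillResignActive(_ scene: UIScene) {
        guard let window else { return }
        snapshotCover.frame = window.bounds
        window.addSubview(snapshotCover)
    }

    // Called when the app becomes active again: remove the cover.
    func sceneDidBecomeActive(_ scene: UIScene) {
        snapshotCover.removeFromSuperview()
    }
}
```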
A simple tutorial on finding and identifying the words a user has mistyped in your application, using the UITextChecker class from UIKit and the NaturalLanguage framework.
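To give a flavor of what the tutorial covers, here is a minimal sketch combining the two: NLLanguageRecognizer guesses the text's language, and UITextChecker walks the string collecting every range it flags as misspelled. The function name and the sample sentence are mine, not from the tutorial.

```swift
import UIKit
import NaturalLanguage

// Collect every word UITextChecker flags as misspelled, using the
// language that NaturalLanguage detects as dominant in the text.
func misspelledWords(in text: String) -> [String] {
    // Guess the language so the checker uses the right dictionary.
    let language = NLLanguageRecognizer.dominantLanguage(for: text)?.rawValue ?? "en"

    let checker = UITextChecker()
    let nsText = text as NSString
    var results: [String] = []
    var searchRange = NSRange(location: 0, length: nsText.length)

    while searchRange.location < nsText.length {
        let misspelled = checker.rangeOfMisspelledWord(in: text,
                                                       range: searchRange,
                                                       startingAt: searchRange.location,
                                                       wrap: false,
                                                       language: language)
        guard misspelled.location != NSNotFound else { break }
        results.append(nsText.substring(with: misspelled))
        let nextStart = misspelled.location + misspelled.length
        searchRange = NSRange(location: nextStart, length: nsText.length - nextStart)
    }
    return results
}

// Example: prints ["misteak"]
print(misspelledWords(in: "This sentence has a misteak in it."))
```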
Starting with iOS 11, Apple introduced a new framework called Vision.
The Vision framework performs face and face landmark detection, text detection, barcode recognition, image registration, and general feature tracking. Vision also allows the use of custom Core ML models for tasks like classification or object detection.
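As a taste of the API, here is a minimal sketch of the standard Vision request/handler pattern applied to face detection; the function name and the print output are illustrative only.

```swift
import Vision
import CoreGraphics

// Minimal sketch: run face detection on a CGImage and print the results.
func detectFaces(in image: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Bounding boxes are normalized (0...1) with origin at bottom-left.
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```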