
WO2020021319A1 - Augmented reality scanning of a real-world object or entry into a virtual perimeter to display virtual objects, and display of real-world activities in the virtual world having corresponding real-world geography - Google Patents

Augmented reality scanning of a real-world object or entry into a virtual perimeter to display virtual objects, and display of real-world activities in the virtual world having corresponding real-world geography

Info

Publication number
WO2020021319A1
WO2020021319A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
user
real world
types
business
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2018/056071
Other languages
English (en)
Inventor
Yogesh Chunilal Rathod
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US16/105,025 (published as US20180350144A1)
Priority to US16/104,973 (published as US11103773B2)
Priority to US16/104,980 (published as US20180345129A1)
Publication of WO2020021319A1
Anticipated expiration (legal status: Critical)
Ceased (legal status: Critical, Current)


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • AR augmented reality
  • a game having a virtual world geography that corresponds to the real world geography; as a result, as the player continuously moves about or navigates in a range of coordinates in the real world, the player also continuously moves about in a range of coordinates in the real world map or virtual world; and responsive to the client device being within a predefined geofence boundary or a set distance of the location of the business in the real world, receiving, by the game server, augmented reality scanning or scanned data or a raw photo or captured photograph, identifying or recognizing, by the game server, an object in the photograph or scanned data and, based on the identified object satisfying the object criteria associated with the virtual object in the stored data, displaying or providing, by the game server, the virtual object and associated data including virtual money to the client device.
  • a game hosting, at a game server, a game, the game having a virtual world geography that corresponds to the real world geography; as a result, as the player continuously moves about or navigates in a range of coordinates in the real world based on monitoring and tracking the current location of the client device, the player also continuously moves about in a range of coordinates in the real world map or virtual world; accessing a pre-defined geo-fence in the real world and associated virtual objects; and responsive to the client device being within a pre-defined boundary of the geofence in the real world or entering the geofence, displaying or providing one or more types of one or more virtual objects.
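The geofence condition described in the items above reduces to testing whether the device's tracked coordinates fall inside a fence before surfacing the attached virtual objects. The following is a minimal Python sketch under the assumption of circular fences; the field names (lat, lon, radius_m, virtual_objects) are illustrative only and not part of the disclosure.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def virtual_objects_for_location(device_lat, device_lon, geofences):
    """Return the virtual objects of every circular geofence the device is inside."""
    visible = []
    for fence in geofences:  # each fence: center, radius, attached virtual objects
        d = haversine_m(device_lat, device_lon, fence["lat"], fence["lon"])
        if d <= fence["radius_m"]:
            visible.extend(fence["virtual_objects"])
    return visible

# Example: a sponsor's shop with a 100 m geofence and one attached virtual coin.
geofences = [{"lat": 19.0760, "lon": 72.8777, "radius_m": 100,
              "virtual_objects": [{"type": "virtual_money", "amount": 50}]}]
print(virtual_objects_for_location(19.0761, 72.8778, geofences))
```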
  • display real world activities in 2D or 3D virtual world or real world map interface having corresponding real world geography.
  • At present, some games enable the user to locate and collect various types of virtual objects based on reaching a particular location or place or pre-defined spots of the physical world or real world and identifying spots based on a provided photo of the spot.
  • Pokemon Go TM enables the user to identify and get a particular type of Pokemon at a particular location, pre-defined place or spot or location, gym and the like.
  • None of the prior arts enables the player to scan a particular object or item or product or one or more types of elements in the real world or physical world, or to scan a particular object or item or product or one or more types of elements at a particular location or place or point of location or pre-defined geofenced boundary in the real world or physical world, and in the event of augmented reality scanning or taking a photograph of a particular object or item or product or one or more types of elements in the real world or physical world, or scanning a particular object or item or product or one or more types of elements at a particular location or place or point of location or pre-defined geofenced boundary in the real world or physical world, displaying one or more types of virtual objects or virtual elements for enabling the user to select, collect, win, battle or play and win, claim, compete, or capture from said displayed one or more virtual objects or virtual elements, or automatically providing one or more types of virtual objects or virtual elements to the user or associating them with or adding them to the user's account.
  • Pokemon Go TM also does not enable the player to visit a pre-defined geofence boundary and view, within said pre-defined geofence boundary, one or more types of virtual objects and select, collect, win, battle or play and win, claim, compete, or capture from said displayed one or more virtual objects or virtual elements, or automatically providing one or more types of virtual objects or virtual elements to the user or associating them with or adding them to the user's account.
  • U.S. Patent Number 9,669,296 of Hibbert, Chris et al. discloses a computer-implemented method of providing a parallel reality game, comprising: hosting, at a game server, a parallel reality game, the parallel reality game having a virtual world with a geography that parallels at least a portion of the geography of the real world such that a player can navigate the virtual world by moving to different geographic locations in the real world; receiving, by the game server, a plurality of requests from a plurality of sponsors, each of the plurality of requests requesting that a virtual element associated with the request be included at a location in the virtual world corresponding to a location of a business in the real world, the business associated with a sponsor that provided the request to the game server from the plurality of sponsors; selecting, by the game server, at least one request from the plurality of requests; responsive to selecting the at least one request, modifying, by the game server, game data to include the virtual element associated with
  • U.S. Patent Application Number 13/345189 discloses managing, via the augmented reality application, a treasure or scavenger hunt in which multiple users are given clues to describe the locations associated with the one or more virtual objects in a predetermined sequence and subsequent virtual objects in the predetermined sequence only become visible to the multiple users upon collecting prior prerequisite virtual objects in the predetermined sequence.
  • the create menu may further include a "Treasure Hunt” option to deploy virtual objects to various worldwide locations and define clues that users may decipher to locate the virtual objects and thereby participate in a virtual scavenger hunt to locate and/or collect virtual objects or content or other virtual items embedded therein.
  • the Treasure Hunt option may make one or more virtual objects to be located and collected therein initially invisible, whereby the one or more initially invisible virtual objects may only become visible to any particular user participating in the Treasure Hunt in response to the user having suitably located and collected one or more previous virtual objects that are prerequisites to the initially invisible virtual objects.
  • the user can, in a plurality of ways, search, locate, identify, determine, view or show and accumulate, select and collect various types of virtual objects or virtual elements based on, or on detection or recognizing or identifying or determining of, or monitoring, tracking, updating, storing and logging of, or triggering of, the user's one or more types of activities and actions including viewing a particular movie or video or a particular movie's or video's particular type of scene at a particular place at a particular date & time or position, listening to particular music or a song or video, participating in a particular event, one or more types of pre-defined senses and behaviours, posting or updating of status or the user's updates and associated keywords, communication, collaboration, connections, and interactions with one or more types of or named entities including persons, contacts, connections, groups, school, college, shop, object, tree, animal, items, and products, one
  • So the present invention discloses displaying of one or more types of virtual objects or virtual elements to the user for enabling the user to search, locate, guess, find, collect, select, claim, and capture said displayed one or more types of virtual objects or virtual elements in the event of triggering of or identifying or recognizing or detection or determination of one or more types of the user's activities in the physical world or real world based on a plurality of ways, factors, and aspects including scanning, capturing a photo, recording a video, scanning a code including a QR code, identifying a particular location or place, recognizing or analyzing a particular object or activity based on object recognition, sensors, devices, identifying keywords based on recognizing the user's voice based on voice detection, identifying text or keywords in scanned objects' associated text via Optical Character Recognition (OCR), based on reaching a particular or identified location or place or point, the user's past or current or surrounding locations, checked-in places, current date & time and associated information, past or instructed or current activities, actions, triggers, participated or participating events, conducted actual or prospective transactions, current status, behaviours, human identified and scanned objects, scanning, particular
  • U.S. Patent 9,754,355 of Chang, Sheldon et al. discloses: access filter data and object criteria; generate a photo filter using the filter data; store data specifying an association between the photo filter and the object criteria; detect that a client device of the server has taken a photograph; identify an object in the photograph; based on the identified object satisfying the object criteria associated with the photo filter in the stored data, provide the photo filter to the client device; store photo filters provided to the client device in a photo filter collection associated with the client device; determine that the photo filter collection includes more than a specified number of stored photo filters of a specified type, and provide a new photo filter to the client device in response to the determination.
  • U.S. Patent 9,225,897 of Sehn, Timothy et al. discloses: identify when a client device captures a photograph; select photograph filters based upon attributes of the client device and attributes of the photograph, wherein the attributes of the client device include geolocation of the client device; supply the selected photograph filters to the client device.
  • A U.S. Patent Application discloses a contextually intelligent communication system and processes that acquire and process data on the current context of a user who is using a connected mobile communication device such as a smart phone or tablet by using various sensors, image recognition or augmented reality residing in the connected device; providing additional data to define the user's current environment; combining the contextual data on the device with the additional data to define the user's complete, relevant context; gathering the user's current context data and updating the device by uploading the data via a wide area mobile network to a contextually intelligent server in the cloud, matching the user's current contextual data with the user's past and historical data and then downloading updated data back to the device, with the updated data including any of a variety of contextually relevant information such as feedback, experiences, recommendations, offers, coupons, advice, tactile feedback, content such as visual and audio representations, augmented reality, and other audio/visual displays to the device of the user that is predictably useful and relevant to the user's current context and future context as the user enters a new context.
  • a connected mobile communication device such
  • the object of the present invention is to receive, update, and store information or data about a plurality of types of one or more attributes, characteristics, features, qualities and structured fields of locations, places, points of interest, particular location points, pre-defined geo-fence boundaries and associated one or more types of objects, items, products, persons, articles, accessories and to monitor, track, update, store and log the user's one or more types of activities, actions, events, transactions automatically from the user device or based on user provided data or information including monitored or tracked current location or place or position or accurate location point of the user device and associated information or data, current date & time and associated information or data, sensor data from one or more types of sensors of one or more types of user devices including smart phone, smart watch, and game device(s), one or more types of scanned data including scanning by the user of a particular location or place, code or QR code or image, logo, product, item, person, animal, tree, scene or anything in the physical world, captured photo, recorded video, provided status, provided answer for a particular question, solving of a particular puzzle, rule or instruction specific fulfillment or any combinations thereof and
  • the other object of the present invention is to display real world activities in 2D or 3D virtual world or real world map interface having corresponding real world geography.
  • the other object of the present invention is to enable the server or a user of the network to provide an instruction or task message or to provide tips or a puzzle or a description about attributes or characteristics or features or qualities or details or values of fields or structured or unstructured data of or related to or associated with a particular place or location or position or point of a particular location or place, or one or more objects at said identified particular place or location or position or point of a particular location or place, to other users of the network including one or more contacts or collections or groups or teams or particular team members, to enable the receiving user of the message to solve said message associated puzzle to identify the particular place or identify one or more objects at said identified particular place or location or position or point of location, or based on the tips or provided details to identify the particular place or location or identify one or more objects at said identified particular place or location or position or point of location, to reach there and search, find, identify, determine, select from displayed one or more virtual objects or virtual elements, collect, capture, claim, win and accumulate one or more types of one or more virtual objects or virtual elements, or after reaching there based on one or more other factors including one or more rules including duration to reach, successfully conducting of required rule specific one
  • the other object of the present invention is the gamification of human life during conducting of or providing of or determination or detection or identification of or participation in or generating associated data of or triggering of a human's one or more types of activities, actions, events, transactions, status, behaviours, senses, communications, visited or current locations, places, points of interest, a particular location point or geo-fence boundary, during interaction with one or more types of entities including people, persons, objects, places, infrastructure including school, college, shop, road, and club, sea, pond, tree, animal, bird, by displaying identified or determined one or more types of virtual objects or virtual elements to the user for enabling the user to select, collect, win, battle or play and win, claim, compete, or capture said displayed one or more virtual objects or virtual elements.
  • the other object of the present invention is to enable brands or shops or local businesses or advertisers to define a geo-fence surrounding their place of business or shop or office or infrastructure or establishments, optionally provide schedules or date & times or ranges of date & times (from - to date & times), provide or upload one or more virtual objects or virtual elements, provide associated rules including duration and offers limited to a particular number of users only, such that to avail said virtual objects or virtual elements and associated offers the user needs to conduct one or more types of activities or actions, participate in a particular event, conduct a particular type of transaction, scan a code including a QR code, or visit the place or location or geo-fence boundary of the business or shop, and provide policies, privacy settings, one or more types of filters and target criteria for displaying said uploaded one or more virtual objects or virtual elements only to said target criteria or filter specific users and/or users who fulfill said rules, and provide associated points, offers or schemes.
  • server validates said information including place or location of business and publisher and associated virtual objects or virtual elements and offers and receiving of associated required payments before displaying to users of network.
  • the term “receiving" posted or shared contents & communication and any types of multimedia contents from a device or component includes receiving the shared or posted contents & communication and any types of multimedia contents indirectly, such as when forwarded by one or more other devices or components.
  • “sending" shared contents & communication and any types of multimedia contents to a device or component includes sending the shared contents & communication and any types of multimedia contents indirectly, such as when forwarded by one or more other devices or components.
  • client application refers to an application that runs on a client computing device.
  • a client application may be written in one or more of a variety of languages, such as 'C', 'C++', 'C#', 'J2ME', Java, ASP.Net, VB.Net and the like. Browsers, email clients, text messaging clients, calendars, and games are examples of client applications.
  • a mobile client application refers to a client application that runs on a mobile device.
  • network application refers to a computer-based application that communicates, directly or indirectly, with at least one other component across a network.
  • Web sites, email servers, messaging servers, and game servers are examples of network applications.
  • Embodiments described herein include accessing real world object associated virtual object data, location or geofence information and object criteria, generating a virtual object using the virtual object data; storing data specifying an association between the real world object associated virtual object, location or geofence information and the object criteria; detecting that a client device of the server has conducted scanning or augmented reality (AR) scanning or taken a photograph or provided a raw photo or scanned data from a particular real world object location; identifying or recognizing an object in the photograph or scanned data; and based on the identified object satisfying the object criteria associated with the virtual object in the stored data, displaying or providing the virtual object and associated data including virtual money to the client device.
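A minimal sketch of the server-side matching step described above: recognized labels from the scan are compared against stored object criteria, and any satisfied rule yields its virtual object. The recognize_objects stub stands in for whatever recognition backend the server would actually use; all names here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class VirtualObjectRule:
    """Association between object criteria (keyword labels) and a virtual object."""
    object_keywords: set
    virtual_object: dict  # e.g. {"name": "bonus coin", "virtual_money": 25}

def recognize_objects(image_bytes):
    """Stand-in for an object-recognition backend; a real server would call a vision model here."""
    return {"soft drink can", "logo"}  # labels pretended to be detected in the scan

def match_virtual_objects(image_bytes, rules):
    """Return every virtual object whose keyword criteria intersect the recognized labels."""
    labels = recognize_objects(image_bytes)
    return [rule.virtual_object for rule in rules if rule.object_keywords & labels]

rules = [VirtualObjectRule({"soft drink can"}, {"name": "bonus coin", "virtual_money": 25})]
print(match_virtual_objects(b"...raw scan bytes...", rules))  # -> the bonus coin rule matches
```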
  • a game hosting, at a game server, a game, the game having a virtual world geography that corresponds to the real world geography; as a result, as the player continuously moves about or navigates in a range of coordinates in the real world, the player also continuously moves about in a range of coordinates in the real world map or virtual world; receiving, by the game server, a plurality of requests from a plurality of sponsors, each of the plurality of requests requesting that a virtual element associated with the request be included at a location in the virtual world corresponding to a location of a business in the real world, the business associated with a sponsor that provided the request to the game server from the plurality of sponsors; selecting, by the game server, at least one request from the plurality of requests; responsive to selecting the at least one request, modifying, by the game server, game data to include the virtual element associated with the at least one request in the game at the location in the virtual world requested by the at least one request; providing, by the game server, the modified game data to a client device of a player; and
  • accessing virtual object data and associated object criteria and required one or more types of actions, generating a virtual object using the virtual object data; storing data specifying an association between the virtual object and the object criteria; detecting that a client device of the server has conducted scanning or augmented reality (AR) scanning or taken a photograph or provided a raw photo or scanned data; identifying or recognizing an object in the photograph or scanned data; based on the identified object satisfying the object criteria associated with the virtual object in the stored data, displaying or providing the virtual object to the client device or making the client device eligible to claim, win, or get virtual objects; and enabling the client device to take one or more actions to get, collect, catch, acquire, win, store or add to the collection of virtual objects of the user or user account and, in the event of taking one or more required actions, storing the virtual objects or adding them to the collection of virtual objects of the user or the user's account or storing the virtual objects provided to the client device in a virtual object collection associated with the client device.
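The "required actions before collection" flow described above can be sketched as a small bookkeeping structure: an object is made eligible with a set of required actions, and it is moved into the user's collection only once every action has been recorded. This is an illustrative sketch; the class and action names are assumptions.

```python
class VirtualObjectCollection:
    """Per-user collection; an eligible object is stored only after all required actions finish."""

    def __init__(self):
        self.collected = []
        self.pending = {}  # object id -> set of actions still required

    def make_eligible(self, obj_id, required_actions):
        self.pending[obj_id] = set(required_actions)

    def record_action(self, obj_id, action):
        """Mark one required action (e.g. 'share', 'check_in') done; collect when none remain."""
        remaining = self.pending.get(obj_id)
        if remaining is None:
            return False
        remaining.discard(action)
        if not remaining:
            self.collected.append(obj_id)
            del self.pending[obj_id]
            return True
        return False

c = VirtualObjectCollection()
c.make_eligible("coin-42", {"check_in", "share"})
c.record_action("coin-42", "check_in")
print(c.record_action("coin-42", "share"), c.collected)  # True ['coin-42']
```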
  • displaying or storing the virtual object and associated virtual money to the client device based on validating the location of the augmented reality scanning or photographing of the real world object by sufficiently matching the location of the augmented reality scanning or photographing of the real world object with the pre-defined or stored location of the real world object.
  • EXIF exchangeable image file format
  • the virtual object is provided to the user or player in response to the user or the player making a purchase of an object or product or service at the business of the sponsor and submitting and validating digital receipt or scanned receipt to the server.
  • one or more types of activities, actions, call-to-actions, participation comprise select virtual object, play displayed mini game, take instructed photo and provide photo, record instructed video and submit video, purchase one or more products or services and submit receipt of purchased one or more products or services of business of sponsor, check in place of business of sponsor, provide one or more types of requested or instructed details, refer or share products or services of business of sponsor, invite particular number of friends or contacts to join with business of sponsor, provide one or more types of requested or instructed reactions, view one or more types of details, presentation, demonstration, video of products or services of business of sponsor or business of sponsor, add to favorite or add to contact list or follow or connect with products or services of business of sponsor or business of sponsor.
  • the object criteria includes an association between an object and a product or service or brand or logo of a seller or a sponsor and the associated virtual object includes images associated with the product or service or brand or logo of the seller or sponsor.
  • virtual object use in virtual world.
  • the virtual object comprises one or more types of power in a game, a virtual item, virtual element, virtual reward, virtual money, virtual currency or other suitable virtual goods including a geo-filter.
  • the virtual object is redeemable in the real world.
  • the virtual object is a coupon, a redeemable point, a gift, a sample, an offer, cash back, discount, or voucher redeemable in the real world.
  • the virtual object is provided to the user or the player in response to the player or the user making a purchase of an object or product or service at the business of the sponsor.
  • in the event of augmented reality scanning or a photograph or scanned data of said particular object in the real world, checking or validating the actual or original date and time of the received scanned data or captured photograph or image based on sufficiently matching the received scanned data's or photograph's or image's associated Exchangeable image file format (EXIF) data, including the date and time of capturing the photo or scanning, with the current date and time of the server, and validating the location of said scanned object of the real world based on sufficiently matching the pre-defined location of said scanned object of the real world with the monitored and tracked current location or place of the user device which scanned or took a photograph of said object; identifying or recognizing an object in the photograph or scanned data; based on the identified object satisfying the object criteria, including object model or image or object keywords, associated with the virtual object in the stored data, displaying or providing the virtual object to the client device; storing virtual objects and associated data including virtual money provided to the client device in a virtual object collection associated with the client device.
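A minimal sketch of the EXIF freshness check described above, assuming the Pillow imaging library is available; 0x8769 and 0x9003 are the standard EXIF sub-IFD pointer and DateTimeOriginal tags. The location half of the validation could reuse the haversine helper from the earlier geofence sketch.

```python
from datetime import datetime, timedelta
from PIL import Image

EXIF_IFD_POINTER = 0x8769        # pointer to the Exif sub-IFD
EXIF_DATETIME_ORIGINAL = 0x9003  # DateTimeOriginal tag

def capture_time_from_exif(path):
    """Read the EXIF DateTimeOriginal timestamp of a photo, if present."""
    exif = Image.open(path).getexif()
    sub_ifd = exif.get_ifd(EXIF_IFD_POINTER)
    raw = sub_ifd.get(EXIF_DATETIME_ORIGINAL) or exif.get(0x0132)  # fall back to DateTime
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None

def is_freshly_captured(path, server_now, max_skew=timedelta(minutes=10)):
    """Accept the upload only if its EXIF capture time is close to the server clock."""
    taken = capture_time_from_exif(path)
    return taken is not None and abs(server_now - taken) <= max_skew

# print(is_freshly_captured("scan.jpg", datetime.now()))
```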
  • the virtual object is provided to the user or the player in response to the client device being within a pre-defined geofence boundary or within a pre-defined geofence boundary surrounding the location of the business of the sponsor.
  • the virtual object is provided to the user or the player in response to the client device being within a pre-defined geofence boundary or within a pre-defined geofence boundary surrounding the location of the business of the sponsor and the player or the user making a purchase of an object or product or service at the business of the sponsor.
  • accessing virtual object data and object criteria, generating a virtual object using the virtual object data; storing data specifying an association between the virtual object and the object criteria; detecting that a client device of the server has conducted scanning or augmented reality (AR) scanning or taken a photograph or provided a raw photo or scanned data; identifying or recognizing one or more objects in the photograph or scanned data; based on the identified one or more objects or each object satisfying the relevant object criteria associated with the virtual object in the stored data, displaying or providing the virtual object and associated data including virtual money to the client device; storing virtual objects and associated data including virtual money provided to the client device in a virtual object and virtual money collection associated with the client device.
  • access one or more types of contents' associated data and metadata and object criteria including one or more object models, generate at least one virtual object or virtual element using the one or more types of contents' associated data and metadata, and associate the at least one generated virtual object or virtual element with the object criteria; and a virtual object or virtual element engine comprising one or more processors and configured to: identify or determine that a client device of the server has scanned a particular object, product, item, or code including a QR code, or taken a photograph or recorded a video in the real world, identify at least one object in the photograph or scanned data provided via the camera application; and based on at least one of the identified objects satisfying the object criteria associated with the at least one generated virtual object or virtual element, provide at least one of the generated virtual objects or virtual elements to the client device.
  • the virtual object or virtual element engine further comprises an object recognition module configured to identify objects in the photograph or received scanned data and compare each object against the object criteria.
  • the object criteria includes recognizing an object as a particular food item and the at least one associated virtual object or virtual element includes images associated with a virtual medal or badge, one or more types of virtual characters or avatars, emoticons, virtual goods, brand information of the sponsor, logo, points, virtual reward including virtual item, virtual energy, virtual currency or other suitable virtual reward.
  • the reward is redeemable in the real world.
  • the reward is a coupon, discount, or voucher redeemable in the real world.
  • the virtual object or virtual element publication module comprises a user-based virtual object or virtual element associated content, data and metadata upload module configured to receive the one or more types of contents' associated data and metadata and object criteria including one or more object models for generating the virtual object or virtual element from a user, and a user-based object criteria upload module configured to receive the object criteria from the user.
  • at least one generated virtual object or virtual element comprises a plurality of virtual objects or virtual elements; the virtual object or virtual element engine further comprises a virtual object or virtual element priority module configured to generate a ranking of the plurality of virtual objects or virtual elements associated with object criteria based on specified virtual object or virtual element priority criteria; and the virtual object or virtual element engine is configured to provide a specified number of the plurality of virtual objects or virtual elements to the client device according to the ranking of the plurality of virtual objects or virtual elements.
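A sketch of the priority module described above: candidates are ordered by the stated criteria (user ranking, then creation date) and capped at a specified count. The dictionary keys are assumptions for the example.

```python
def rank_virtual_objects(candidates, max_results=3):
    """Order candidates by (user ranking desc, creation date desc) and cap the count."""
    ranked = sorted(candidates,
                    key=lambda o: (o.get("user_ranking", 0), o.get("created", "")),
                    reverse=True)
    return ranked[:max_results]

candidates = [
    {"name": "badge",  "user_ranking": 4, "created": "2018-07-01"},
    {"name": "coupon", "user_ranking": 5, "created": "2018-06-15"},
    {"name": "coin",   "user_ranking": 4, "created": "2018-07-20"},
]
print([o["name"] for o in rank_virtual_objects(candidates, 2)])  # ['coupon', 'coin']
```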
  • the virtual object or virtual element priority criteria includes criteria based on a virtual object or virtual element creation date, a virtual object or virtual element type or a user ranking of the virtual object or virtual element.
  • the virtual objects or virtual elements include a brand associated with an establishment proximate to the geolocation of the client device.
  • a request requesting that one or more virtual objects or virtual elements associated with the request be included at a location or place or predefined particular geo-fence boundary, or at a location or place or predefined particular geo-fence boundary as per a pre-defined schedule or start and end date of publication or availability of said virtual objects or virtual elements, in the virtual world corresponding to a location or place or predefined particular geo-fence boundary of a business in the real world; validating the request associated business; adding or including one or more virtual objects or virtual elements in the virtual world or associating said one or more virtual objects or virtual elements with said business location or place; receiving from the user a scanned or captured photo of the receipt of the purchase; based on the receipt, validating the actual purchase of one or more products or services by the user from said business including unique business name, place or location, date & time of purchase, amount of purchase, quantity and names and details of one or more products or services, wherein identifying Exchangeable image file format (EXIF) data in the scanned or photographed receipt including original date & time of scanned or captured photo of
  • validating the request associated business based on recognizing the business name and location from the received scanned or captured photo of the receipt of purchase based on object recognition and Optical Character Recognition (OCR) techniques and sufficiently matching said recognized business location or place with the current location of the user device that uploaded or submitted said scanned or photographed receipt.
  • OCR Optical Character Recognition
  • the reward is provided to the user or player in response to the client device being within a geo-fence boundary, set distance of the place or location of the business and validating the purchase made by player or user at the business by validating player or user uploaded or submitted scanned or captured photo of receipt of the purchase of a one or more products or services at the business.
  • a game hosting, at a game server, a game, the game having a virtual world geography that corresponds to the real world geography; as a result, as the player continuously moves about or navigates in a range of coordinates in the real world based on monitoring and tracking the current location of the client device, the player also continuously moves about in a range of coordinates in the real world map or virtual world; accessing a pre-defined geo-fence in the real world and associated virtual objects; and responsive to the client device being within a pre-defined boundary of the geofence in the real world or entering the geofence or staying or dwelling for a pre-set or particular duration within the geofence, displaying or providing, by the game server, one or more types of one or more virtual objects and associated data including virtual money to the client device or in the real world map or virtual world.
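The stay-or-dwell condition above can be tracked by remembering when each device entered each geofence and comparing the elapsed time against the required dwell duration on every location fix. A minimal sketch follows; the class and method names are illustrative assumptions.

```python
from datetime import datetime, timedelta

class DwellTracker:
    """Track when a device entered a geofence and report whether it has dwelled long enough."""

    def __init__(self, required_dwell=timedelta(minutes=2)):
        self.required_dwell = required_dwell
        self.entered_at = {}  # (device_id, fence_id) -> entry timestamp

    def update(self, device_id, fence_id, inside, now=None):
        """Call on every location fix; returns True once the dwell requirement is met."""
        now = now or datetime.utcnow()
        key = (device_id, fence_id)
        if not inside:
            self.entered_at.pop(key, None)  # leaving the fence resets the clock
            return False
        self.entered_at.setdefault(key, now)
        return now - self.entered_at[key] >= self.required_dwell

# tracker = DwellTracker(required_dwell=timedelta(minutes=2))
# tracker.update("dev-1", "fence-7", inside=True)  # first fix inside starts the clock
```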
  • enabling provision of one or more schedules for availability of virtual objects within a pre-defined geofence boundary or within a pre-defined geofence boundary that covers the location of the business of the sponsor.
  • in an embodiment, verifying the geofenced boundary, associated payments, schedules and virtual objects and data before making them available to users of the network.
  • the virtual object is provided to the user or the player in response to the client device being within a pre-defined geofence boundary or within a pre-defined geofence boundary that covers the location of the business of the sponsor.
  • the virtual object is provided to the user or the player in response to the client device being within a pre-defined geofence boundary or within a pre-defined geofence boundary that covers the location of the business of the sponsor and the player or the user making a purchase of an object or product or service at the business of the sponsor.
  • a unique push message containing a unique code or identity via notification; displaying or providing, by the game server, said received unique code or identity associated or determined one or more types of one or more virtual objects and associated data including virtual money to the client device or in the real world map or virtual world.
  • a game hosting, at a game server, a game, the game having a virtual world geography that corresponds to the real world geography; as a result, as the player continuously moves about or navigates in a range of coordinates in the real world based on monitoring and tracking the current location of the client device, the player also continuously moves about in a range of coordinates in the real world map or virtual world.
  • a unique push message containing a unique code or identity via notification; displaying or providing, by the game server, said received unique code or identity associated one or more types of one or more virtual objects and associated data including virtual money to the client device or in the real world map or virtual world.
  • virtual objects and associated data including virtual money provided to the client device in a virtual object and virtual money collection associated with the client device.
  • receiving a plurality of requests from a plurality of advertisers or sponsors, each of the plurality of requests requesting that a virtual object or virtual element associated with the request be included at a geofence boundary in the virtual world corresponding to a geofence boundary of a business in the real world, the business associated with an advertiser or sponsor that provided the request to the game server from the plurality of advertisers or sponsors; selecting, by the game server, at least one request from the plurality of requests; responsive to selecting the at least one request, modifying, by the game server, game data to include the virtual object or virtual element associated with the at least one request in the game at the geofence boundary in the virtual world requested by the at least one request; and providing, by the game server, the modified game data to a client device of a player.
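A sketch of the request-handling step described above: each selected sponsor or advertiser request attaches its virtual element to the requested geofence in the game data. The selection policy is left as a callable because the disclosure does not fix one; all names are assumptions.

```python
def apply_sponsor_requests(game_data, requests, select):
    """Attach each selected sponsor's virtual element to the requested geofence in the game data."""
    for req in select(requests):
        fence_id = req["geofence_id"]
        game_data.setdefault(fence_id, []).append(req["virtual_element"])
    return game_data

game_data = {}
requests = [
    {"sponsor": "Cafe A", "geofence_id": "fence-7", "virtual_element": {"name": "free-coffee coupon"}},
    {"sponsor": "Shop B", "geofence_id": "fence-9", "virtual_element": {"name": "discount badge"}},
]
# Selection policy is up to the server; here every request is accepted.
print(apply_sponsor_requests(game_data, requests, select=lambda reqs: reqs))
```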
  • a game having a virtual world geography that corresponds to the real world geography and having a virtual world environment that corresponds to the real world environment; as a result, based on monitoring, tracking, identifying, recognizing, detecting, analyzing, processing, logging and storing, as the player conducts or provides information about, or the system automatically logs, the user's or player's one or more types of activities, actions, participations in events, providing of status, visiting or interacting with one or more types of locations or places, interacting with one or more types of entities or contacts, conducting one or more types of transactions with one or more types of entities, conducting one or more types of digital activities, actions, senses, behaviours, interactions, status, reactions, call-to-actions, transactions, sharing, communications, collaborations, connections in the real world and/or digital world including websites and applications, the player can also interact and be connected, followed, related, mapped, and associated with said entities in the virtual world as a relative, friend, class mate, colleague, partner, employer, employee, neighbor, society member, citizen, native,
  • the virtual representation of an entity can play with the player in the virtual world, provide virtual objects in the virtual world that can be used in the virtual world, provide virtual rewards in the virtual world that can be redeemed in the real world, sell virtual goods in the virtual world, and sell, present, provide support for, market, and advertise real products and services in the virtual world.
  • virtual avatar of player can directly or virtually reach at any places related to said one or more types of entities.
  • the virtual avatar of the player can directly or virtually reach any or related or connected one or more types of entities in the virtual world.
  • the real world user associated with the player's virtual avatar needs to physically reach a particular place related to a particular named or type of entity in order to virtually reach the virtual representation of said real world particular named or type of entity.
  • object criteria including object model or image or object keywords comprises image related to particular named character or actors or actress, type or brand related cloths or jewelry or accessories, name or image of music composer or singer, image or name of product, service, location or place, logo, brand, company, advertiser or sponsor, person, shop, hotel, restaurant, tourist place or location, or any type of identified or named or type of entity or scene in real world.
  • enabling the user to search and select one or more locations or places, or to define or draw a geo-fence boundary surrounding one or more locations or places on the map, and enabling the user to associate an instruction or hint with said one or more locations or places.
  • enabling the user to associate one or more virtual objects with said defined real world object, provide schedules of availability of said virtual objects, provide or associate one or more virtual objects with said instruction or hint, and provide or associate one or more rules with said instruction or hint or with the getting or capturing or winning of said virtual objects.
  • receiving from the user a scanned copy or photo of the receipt of the purchase; based on the receipt, validating the actual purchase of one or more products or services by the user from said business including unique business name, place or location, date & time of purchase, amount of purchase, quantity and names and details of one or more products or services, wherein identifying
  • Exchangeable image file format (EXIF) data in the scanned or photographed receipt including the original date & time of the scanned or captured photo of the receipt and sufficiently matching said extracted or identified original date & time with the server's current date & time to validate or check or verify the originality of the captured photo or recorded video; identifying the monitored or tracked location or place of the user device or identifying that it entered and stays in the geo-fence boundary at the time of sending the scanned or photographed receipt; identifying or recognizing the unique business name, place or location, date & time of purchase, amount of purchase, quantity, names and details of one or more products or services from the received scanned or photographed receipt based on object recognition and Optical Character Recognition (OCR) technologies; and responsive to the client device being within a set distance of the location of the particular business in the real world, providing, by the server, said location or place associated business associated one or more types of virtual objects or virtual elements or a reward to the user or player.
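A minimal sketch of the receipt-validation step described above, assuming the pytesseract binding and the Tesseract OCR engine are installed; the checks here (business name, a location keyword, a minimum amount) are simplified stand-ins for the full set of fields named in the item.

```python
import re
from PIL import Image
import pytesseract  # assumes the Tesseract OCR engine is installed locally

def validate_receipt(image_path, expected_business, expected_location_keyword, min_amount=0.0):
    """OCR a receipt photo and check the business name, a location keyword and a total amount."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    business_ok = expected_business.lower() in text
    location_ok = expected_location_keyword.lower() in text
    amounts = [float(m) for m in re.findall(r"\d+\.\d{2}", text)]
    amount_ok = bool(amounts) and max(amounts) >= min_amount
    return business_ok and location_ok and amount_ok

# print(validate_receipt("receipt.jpg", "Cafe A", "Main Street", min_amount=5.00))
```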
  • a game having a virtual world geography that corresponds to the real world geography; as a result, as the player continuously moves about or navigates in a range of coordinates in the real world, the player also continuously moves about in a range of coordinates in the real world map or virtual world; accessing real world object associated virtual object data, location or geofence information and object criteria, generating a virtual object using the virtual object data; storing data specifying an association between the real world object associated virtual object, location or geofence information and the object criteria; responsive to the client device being within a predefined geofence boundary or a set distance of the location of the business in the real world, notifying or displaying the information about one or more types of one or more real world objects to the user or player; displaying information about one or more types of one or more real world objects; detecting that a client device of the server has conducted scanning or augmented reality (AR) scanning or taken a photograph or provided a raw photo or scanned data from the particular real world object location; identifying or
  • accessing virtual object data and associated object criteria, generating a virtual object using the virtual object data; storing data specifying an association between the virtual object and the object criteria; detecting that a first client device of the server from a particular group has conducted scanning or augmented reality (AR) scanning or taken a photograph or provided a raw photo or scanned data; detecting that a second client device of the server from the particular group has conducted scanning or augmented reality (AR) scanning or taken a photograph or provided a raw photo or scanned data; identifying or recognizing an object in the photograph or scanned data received from the first or second or each client device of the server from the particular group; and based on the identified object satisfying the object criteria associated with the virtual object in the stored data, displaying or providing the virtual object to the first or second or each client device of the group, or making the first or second or each client device of the group eligible to claim, win, play a game and win, play a lottery or lucky draw contest or puzzle or provide an answer to a question to win, battle, capture, select, add to collection, and get one or more virtual objects.
  • dynamically creating a group within a particular session based on sufficiently matching the monitored or tracked current location of user devices with the pre-defined location of the real world object or a geofence surrounding the pre-defined location of the real world object, or user devices being within a particular range of the location or entering into or staying within or staying for a pre-set duration within a particular pre-defined geofence, and adding them to the dynamically created location or place or real world object specific group.
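A sketch of the dynamic, location-based group creation described above: every device whose latest fix lies within a set radius of the real world object is placed into one session group. It expects a distance helper such as the haversine function in the earlier geofence sketch; the parameter names are assumptions.

```python
def build_session_group(object_location, device_fixes, radius_m, distance_m):
    """Return the ids of all devices whose latest fix is within radius_m of the real world object.

    device_fixes: {device_id: (lat, lon)}; distance_m: a distance helper such as the
    haversine function shown in the earlier geofence sketch.
    """
    obj_lat, obj_lon = object_location
    return {device_id for device_id, (lat, lon) in device_fixes.items()
            if distance_m(lat, lon, obj_lat, obj_lon) <= radius_m}

# group = build_session_group((19.0760, 72.8777), {"dev-1": (19.0761, 72.8778)}, 100, haversine_m)
```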
  • notifying or displaying the information about participating users in the current session play, or notifying or displaying the information about participant group members to the user or player in response to the client device being within a set distance of the location of the real world object or entering into or staying or dwelling for a particular duration within the geofence.
  • a virtual world geography or real world map interface that corresponds to the real world geography or real world map including locations and places; as a result, as the user continuously moves about or navigates in a range of coordinates in the real world based on monitoring and tracking the current location of the client device, the user also continuously moves about in a range of coordinates in the real world map or virtual world.
  • monitoring, tracking, recording, processing, logging and storing information about user visited places or currently visiting place in real-world based on monitoring and tracking current location of user device and said monitored or tracked current location or place associated information including location or place name and details, routes and directions.
  • receiving from a user selection or marking of particular location or place or current location or place as starting point or particular location or place as end point or automatically determining or selecting or marking particular location or place as starting point and end point.
  • receiving from user one or more types of information including one or more photos, videos, voice, audio, images, text, web address or links, location or place information, multimedia, animations, emoticons or stickers, voice commentary, comments or notes and one or more types of structured data including dynamically displayed form or field specific data, contents and multimedia and receiving information about or monitoring, tracking and logging information related to user’s one or more types of activities, actions, events, participations, senses, behaviours, communications, collaborations, sharing, status, transactions associated with particular place.
  • presenting a first content item of the set of content items for a first view period of time defined by a timer, wherein the first content item is hidden when the first view period of time expires; receiving from a touch controller a haptic contact signal indicative of a gesture applied to the display during the first view period of time; wherein the content presentation controller hides the first content item in response to the haptic contact signal and proceeds to present on the display a second content item of the set of content items for a second view period of time defined by the timer, wherein the content presentation controller hides the second content item upon the expiration of the second view period of time; wherein the second content item is hidden when the touch controller receives another haptic contact signal indicative of another gesture applied to the display during the second view period of time; and wherein the content presentation controller initiates the timer upon the display of the first content item and the display of the second content item.
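A sketch of the timed content presentation with tap-to-advance described above: each item is shown for at most one view period, and a haptic contact (tap) hides it early and advances to the next item. The display and poll_tap callables are placeholders for a real UI layer, not actual APIs.

```python
import time

def present_story(content_items, view_seconds, display, poll_tap, poll_interval=0.1):
    """Show each content item for at most view_seconds; a tap (haptic contact) advances early.

    display(item) renders an item; poll_tap() returns True when a gesture was applied.
    Both are stand-ins for a real content presentation controller and touch controller.
    """
    for item in content_items:
        display(item)
        deadline = time.monotonic() + view_seconds  # timer starts when the item is shown
        while time.monotonic() < deadline:
            if poll_tap():          # a gesture hides the current item and moves on
                break
            time.sleep(poll_interval)
        # the item is hidden here, either because the timer expired or a tap arrived

# Example with trivial stand-ins for the UI callbacks:
present_story(["photo-1", "photo-2"], view_seconds=1,
              display=lambda item: print("showing", item),
              poll_tap=lambda: False)
```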
  • a virtual world geography or real world map interface that corresponds to the real world geography or real world map including locations and places; as a result, as the user continuously moves about or navigates in a range of coordinates in the real world based on monitoring and tracking the current location of the client device, the user also continuously moves about in a range of coordinates in the real world map or virtual world; monitoring, tracking, recording, processing, logging and storing information about places the user visited or is currently visiting in the real world based on monitoring and tracking the current location of the user device and said monitored or tracked current location or place associated information including location or place name and details, routes and directions; receiving from a user a selection or marking of a particular location or place or the current location or place as a starting point or a particular location or place as an end point, or automatically determining or selecting or marking a particular location or place as the starting point and end point; receiving from the user one or more types of information or contents including one or more photos, videos, voice, audio, images, text, web addresses or links, location or place information, multimedia, animations, emoticons or stickers, voice commentary,
  • generating 3D or multi-dimensional animations or 3D or multi-dimensional animated graphics or a 3D or multi-dimensional simulation; and displaying said generated 3D or multi-dimensional animations or 3D or multi-dimensional animated graphics or 3D or multi-dimensional simulation inside said 3D or multi-dimensional place of activity in the 3D or multi-dimensional virtual world geography or 3D or multi-dimensional real world map interface.
  • GPS Global Positioning System
  • changing or updating or selecting the avatar of the user based on the type of activity, including: if the user is travelling via a particular type of vehicle, then changing the avatar or image to one depicting the user travelling; if the user is eating a particular type of food, then changing the avatar or image to one depicting the user eating that particular type of food.
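A minimal illustration of the activity-to-avatar mapping described above; the activity labels and image names are assumptions for the example.

```python
ACTIVITY_AVATARS = {
    "driving": "avatar_in_car.png",
    "cycling": "avatar_on_bike.png",
    "eating_pizza": "avatar_eating_pizza.png",
}

def avatar_for_activity(activity, default="avatar_default.png"):
    """Pick the avatar image that depicts the user's current detected activity."""
    return ACTIVITY_AVATARS.get(activity, default)

print(avatar_for_activity("driving"))  # avatar_in_car.png
```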
  • displaying the shared story to one or more contacts, followers, or one or more selected types or criteria specific users of the network, saving it as private, or making it public.
  • One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer- implemented method.
  • Programmatically means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device.
  • a programmatically performed step may or may not be automatic.
  • One or more embodiments described herein can be implemented using programmatic modules, engines, or components.
  • a programmatic module, engine, or component can include a program, a sub-routine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions.
  • a module or component can exist on a hardware component independently of other modules or components.
  • a module or component can be a shared element or process of other modules, programs or machines.
  • Some embodiments described herein can generally require the use of computing devices, including processing and memory resources.
  • computing devices such as servers, desktop computers, cellular or smartphones, personal digital assistants
  • Memory, processing, and network resources may all be used in connection with the establishment, use, or performance of any embodiment described herein
  • one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium.
  • Machines shown or described with figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing embodiments of the invention can be carried and/or executed.
  • the numerous machines shown with embodiments of the invention include processor(s) and various forms of memory for holding data and instructions.
  • Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers.
  • Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash memory (such as carried on smartphones, multifunctional devices or tablets), and magnetic memory.
  • Computers, terminals, network enabled devices are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums. Additionally, embodiments may be implemented in the form of computer-programs, or a computer usable carrier medium capable of carrying such a program.
  • Figure 1 is a network diagram depicting a network system having a client-server architecture configured for exchanging data over a network, implementing various embodiments of a system that enables the user or player to conduct augmented reality scanning of real world objects or to capture a photo of real world objects and send said scanned data or raw photo or captured photo or image to the server, where the server identifies or recognizes and validates it and displays and adds or stores the associated one or more virtual objects to the user's account.
  • server receives from client device photos or videos for recognizing or identifying user’s activities in real world and displaying or adding and storing associated one or more virtual objects to user’s account related system.
  • server monitors and tracks digital activities, actions, events, transactions of user related to interacted, participated, transacted external websites, applications and services and displaying or adding and storing associated one or more virtual objects to user’s account related system.
  • Figure 1 depicts an exemplary computer-based system for implementing a location-based game according to an exemplary embodiment of the present disclosure.
  • enabling to send message including instruction, assign or request or suggest task to one or more target recipients;
  • Figure 2 illustrates components of an electronic device implementing various embodiments of enabling to scan real world objects and send to server for processing including identifying or recognizing and validating real world objects and displaying associated virtual objects and adding and storing associated one or more virtual objects to user’s account related system.
  • enabling to send message including instruction, assign or request or suggest task to one or more target recipients
  • Figures 3-6 illustrate example Graphical User interface (GUI) for enabling the user or administrator of the server to provide various types of details about real world objects or define real world objects and set or apply or select or provide various types of object criteria, target criteria, settings, preferences, customization, configuration, rules, and actions, upload associated virtual objects, order and make payment, and conduct verification, system configuration, updates, moderation and validation.
  • GUI Graphical User interface
  • Figure 7 shows a flow diagram illustrating one example embodiment of an operation of the virtual object publication module.
  • Figures 8-14 illustrate various examples of various embodiments of the present disclosure.
  • Figure 15 illustrates example Graphical User interface (GUI) for enabling the user to view various types of notifications.
  • Figure 16 illustrates example Graphical User interface (GUI) for enabling user to submit activity specific one or more photos or videos to server for verification and with intention to receive associated virtual objects.
  • Figure 17 illustrates an example Graphical User Interface (GUI) for enabling the user to access, view, connect with, communicate with, follow, play with, share with, view information, posts, or messages of, and receive one or more types of virtual objects from one or more types of entities in the virtual world that are related to, interacted with, connected with, transacted with, or participated with in the real world.
  • Figure 18 illustrates an example 3D or 2D map Graphical User Interface (GUI) for enabling the user to access, view, connect with, communicate with, follow, play with, share with, view information, posts, or messages of, and receive one or more types of virtual objects from real-world-related, interacted, connected, transacted, or participated one or more types of entities in the real world map or virtual world corresponding to the geography of the real world.
  • FIGS. 19-21 illustrate example Graphical User Interfaces (GUIs) for enabling the user to take one or more types of actions, call-to-actions, and reactions and conduct transactions and, in the event of taking one or more types of activities, actions, participations, call-to-actions, or reactions or conducting transactions, view, get, capture, win, or acquire the associated displayed one or more types of virtual objects.
  • Figures 22-26 illustrate example Graphical User Interfaces (GUIs) for enabling the user to view, access, and manage collections of plural types of virtual objects got, won, caught, or purchased by the user, with the associated particular amount of virtual money or points.
  • Figure 27 illustrates an example Graphical User Interface (GUI) for enabling the user to prepare, draft, or select a message, wherein the message comprises a task description, requirement specification, instruction, or request to identify, search for, and get an instruction-specific real world object or augmented reality scan an instruction-specific real world object and get, win, catch, or capture the associated one or more types of virtual objects.
  • Figure 28 illustrates an example Graphical User Interface (GUI) enabling the user to view a received message, take one or more types of user actions or call-to-actions including accepting or rejecting the received message, identify, search for, and get the instruction-specific real world object or augmented reality scan the instruction-specific real world object, get, win, catch, or capture the associated one or more types of virtual objects, and provide one or more types of status details to the instruction sender.
  • Figure 29 illustrates an example Graphical User Interface (GUI) enabling the user to view one or more types of status details related to a sent message, take one or more types of user actions or call-to-actions, and provide one or more types of reactions.
  • Figure 30 illustrates example Graphical User interface (GUI) enabling user to select one or more types of playing mode, send or accept invitations, define or update rules, create, participate in and manage one or more teams and view scores, statistics, status and various types of logs.
  • Figure 31 illustrates example Graphical User interface (GUI) enabling user to select, set, apply and update one or more types of privacy settings, preferences, and rules and configure and consume one or more types of services.
  • Figures 32-34 illustrate example Graphical User Interfaces (GUIs) enabling the user to define a geofence and associated virtual objects, criteria, authorized users who may view the associated virtual objects and details, preferences, schedules, required actions, call-to-actions, rules, and playing of one or more types of mini games.
  • Figures 35-36 illustrate example Graphical User Interfaces (GUIs) enabling the user to view virtual objects when the user enters the pre-defined geofence boundary or stays for a pre-set duration within said pre-defined geofence boundary.
  • Figure 37 illustrates displaying a 3D map of a real world outdoor and indoor view for enabling a real world player's virtual avatar to visit a nearby shop, restaurant, or place of business, wherein the 3D real world map also contains virtual objects.
  • Figure 37 also illustrates a virtual world geography that corresponds to the real world geography and displays virtual objects which may be used in the virtual world and/or may be redeemable in the real world.
  • The real world player can select, get, win, capture, acquire, claim, and add to the user's collection of virtual objects and play a mini game to capture said displayed virtual objects, virtual money, and virtual rewards including vouchers, redeemable points, coupons, offers, gifts, samples, cash backs, and discounts which may be redeemable in the real world.
  • Figure 38 illustrates displaying, at a particular location, virtual objects specific to a real object in the virtual world.
  • Figure 39 illustrates an example Graphical User Interface (GUI) enabling the user to provide, set, apply, and update one or more types of settings, preferences, and privacy settings.
  • Figure 40 illustrates example Graphical User interface (GUI), displaying exemplary home screen which facilitates accessing of all features of game, notifications, map, feed, activity feed applications.
  • Figure 41 illustrates example Graphical User interface (GUI) displaying user profile, status, levels, number of or amount of virtual money or virtual currency, number and types of virtual objects, score, various types of statistics.
  • Figure 42 illustrates example Graphical User interface (GUI) displaying feed items or one or more types of published contents or media or posts from one or more sources including connected users, followers, members of team, user specific auto matched sources, and preferences specific sources.
  • Figure 43 illustrates example Graphical User interface (GUI) showing map story interface for enabling user to create, view monitored or tracked, edit, provide one or more types of contents, update, manage, test, publish or live publish and share map story or story on map or story with map or feed or feed items including story associated with one or more places, locations, point of interests, between visited places.
  • Figures 44-46 illustrate example 2D, 3D, or multi-dimensional Graphical User Interfaces (GUIs) displaying one or more types of map stories, or stories on or with a map, related to or shared by one or more contacts or users of the network, for enabling the user to view, play, pause, stop, view as per the user device's current location, go to start, go to end, rewind, forward, fast forward, or directly click on a particular place or content item (e.g. photo or video or text) and view, play from, or jump to the start or a particular point, duration, location, or place; provide one or more types of reactions; view routes and directions; and view place-associated or shared one or more types of contents including one or more photos, videos, live video, structured contents (structured contents may be provided via fields and associated one or more types of values, data, and contents or via forms or dynamically presented forms), voice, images, links or web addresses, text, animations, 3D contents, multimedia, emoticons, stickers, emoji, and place information provided by the publisher or sharing user and/or the server.
  • user can view combined stories of one or more selected users who shared stories related to similar places.
  • FIG. 47 is a block diagram that illustrates a mobile computing device upon which embodiments described herein may be implemented.
  • FIG. 1 illustrates an example augmented reality scanning or taking photo of real world object and getting associated virtual object platform, under an embodiment.
  • system 100 can be implemented through software that operates on a portable computing device, such as a mobile computing device 200.
  • System 100 can be configured to communicate with one or more network services, databases, or objects that coordinate, orchestrate, or otherwise receive, process, and store information, data, preferences, settings, location, target criteria, object criteria, virtual objects, collection rules, and actions related to real world objects; that search, access, and display real world objects and associated information; and that process augmented reality scanning of object(s), face, body parts, voice, and optical character recognition (OCR).
  • the mobile computing device can integrate third-party services which enable further functionality through system 100.
  • In the system for the augmented reality scanning or taking photo of real world object and getting associated virtual object platform, the user is enabled to define one or more types of real world objects including product, animal, bird, flower, art, sculpture, item, accessory, or type of scene and provide associated information (discussed in detail in Figures 3-6). The user is also enabled to do augmented reality scanning or take a photo of a real world object and get the associated virtual object (discussed in detail in Figures 7-14). While FIG. 1 illustrates a gateway 120, a database 115 and a server 110 as separate entities, the illustration is provided for example purposes only and is not meant to limit the configuration of the augmented reality scanning or taking photo of real world object and getting associated virtual object system. In some embodiments, gateway 120, database 115 and server 110 may be implemented in the augmented reality scanning or taking photo of real world object and getting associated virtual object system as separate systems, a single system, or any combination of systems.
  • The augmented reality scanning or taking photo of real world object and getting associated virtual object system may include real world object definer or information provider user devices or mobile devices 130/140 and augmented reality scanning or taking photo of real world object and getting associated virtual object user devices or mobile devices 135/145.
  • Devices or mobile devices 130/140/135/145 may be a particular set number of or an arbitrary number of devices or mobile devices which may be capable of providing information, settings, criteria, and actions related to one or more types of real world objects including product, animal, bird, flower, art, sculpture, item, accessory, or type of scene (discussed in detail in Figures 3-6) and of conducting augmented reality scanning or taking a photo of a real world object and getting the associated virtual object (discussed in detail in Figures 7-14).
  • Each device or mobile device in the set of real world object definer or information provider user devices or mobile devices 130/140 and augmented reality scanning or taking photo of real world object and getting associated virtual object user devices or mobile devices 135/145 may be configured to communicate, via a wireless connection, with each one of the other mobile devices 130/140/135/145.
  • Devices 130/140/135/145 may also be configured to communicate, via a wireless connection, with a network 125, as illustrated in FIG. 1.
  • Devices 130/140/135/145 may be implemented within a wireless network such as a Bluetooth network or a wireless LAN.
  • the augmented reality scanning or taking photo of real world object and getting associated virtual object system may include gateway 120.
  • Gateway 120 may be a web gateway which may be configured to communicate with other entities, including advertisers, sponsors, and service providers of the augmented reality scanning or taking photo of real world object and getting associated virtual object system, via wired and/or wireless network connections.
  • gateway 120 may communicate with mobile devices 130/140/135/145 via network 125. In various embodiments, gateway 120 may be connected to network 125 via a wired and/or wireless network connection. As illustrated in FIG. 1, gateway 120 may be connected to database 115 and server 110 of user to user connection system. In various embodiments, gateway 120 may be connected to database 115 and/or server 110 via a wired or a wireless network connection.
  • Gateway 120 may be configured to receive information about real world object, associated settings, criteria, object criteria, and virtual object(s) and rules and required actions to get virtual object(s), send information about real world objects, receive the augmented reality scanned data or captured photo of real world object, send recognized object in received scanned data or captured photo of real world object associated virtual object(s), send and receive message or task or instruction or request and message or task or instruction or request specific one or more types of content including photo or video, search results, notifications, shared or published contents, user data, wherein user data comprises user requests, user profile, user connections or contacts, connected users’ data, user shared data or contents, user’s logs, monitored or tracked information about user’s one or more types of activities, actions, events, senses, transactions, status, updates, presence information, locations, check-in places and like to/from mobile devices 130 / 140 / 135 / 145.
  • gateway 120 may be configured to store information related to real world objects and associated settings, criteria, object criteria, and virtual object(s) and rules and required actions to get virtual object(s) and augmented reality scanned data or captured photo of real world object to database 115 for storage.
  • gateway 120 may be configured to send or present request specific information about real world objects to requestor or target recipients from stored database 115 to mobile devices 130/140/135/145.
  • Gateway 120 may be configured to receive requests from mobile devices 130/140/135/145 to process augmented reality scanned data or captured photo of real world object for identifying and displaying virtual objects.
  • gateway 120 may receive a request from a mobile device and may query database 115 with the request for searching and matching request specific information about real world objects or one or more types of contents including photos, videos. For example, gateway 120 may receive a request from a mobile device to process scanned data or captured photo of real world object. Gateway 120 may be configured to inform server 110 of updated data. For example, gateway 120 may be configured to notify server 110 when new information about real world object or scanned data or captured photo of real world object has been received from a mobile device stored on database 115.
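As a minimal illustration of the gateway flow described above, the sketch below shows a handler that receives a scanned photo, obtains labels from an image recognizer, queries stored real world object definitions for a match against the object criteria, and returns the associated virtual object while notifying the server. All names, the recognizer interface, and the in-memory "database" are assumptions for illustration only; the disclosure does not prescribe this implementation.

```python
# Minimal sketch of the gateway request flow (hypothetical names and interfaces).
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class RealWorldObjectDef:
    object_id: str
    keywords: List[str]          # object criteria: labels that must match the scan
    virtual_object: dict         # e.g. {"name": "gold coin", "virtual_money": 50}

@dataclass
class Gateway:
    database: List[RealWorldObjectDef]
    recognize: Callable[[bytes], List[str]]   # image bytes -> recognized labels
    notify_server: Callable[[str], None]      # tells server 110 about updated data

    def handle_scan(self, user_id: str, photo: bytes) -> Optional[dict]:
        labels = {label.lower() for label in self.recognize(photo)}
        for obj in self.database:                                  # query stored definitions
            if labels & {k.lower() for k in obj.keywords}:         # object criteria satisfied
                self.notify_server(f"user {user_id} matched {obj.object_id}")
                return obj.virtual_object                          # virtual object sent back to client
        return None                                                # nothing recognized/validated

# Stubbed usage:
gw = Gateway(
    database=[RealWorldObjectDef("pizza-nyc", ["pizza"], {"name": "pizza badge", "virtual_money": 50})],
    recognize=lambda photo: ["pizza", "table"],
    notify_server=print,
)
print(gw.handle_scan("user-A", b"...jpeg bytes..."))
```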
  • the augmented reality scanning or taking photo of real world object and getting associated virtual object system may include a database, such as database 115.
  • Database 115 may be connected to gateway 120 and server 110 via wired and/or wireless connections.
  • Database 115 may be configured to store a database of registered user’s profile, accounts, logged activities, indexes, information about real world object, associated settings, criteria, object criteria, and virtual object(s) and rules and required actions to get virtual object(s), the augmented reality scanned data or captured photo of real world object, one or more types of user related or associated data, payments information received from mobile devices 130/140/135/145 via network 125 and gateway 120.
  • Database 115 may also be configured to receive and service requests from gateway 120.
  • database 115 may receive, via gateway 120, a request from a mobile device and may service the request by providing, to gateway 120, user profile, user data, user account’s related data, information about real world objects, recognized object in received scanned data or captured photo of real world object associated virtual object(s), notifications, messages, contents which meet the criteria specified in the request.
  • Database 115 may be configured to
  • the augmented reality scanning or taking photo of real world object and getting associated virtual object system may include a server, such as server 110.
  • Server may be connected to database 115 and gateway 120 via wired and/or wireless connections.
  • server 110 may be notified, by gateway 120, of new or updated information about real world objects, the augmented reality scanned data or captured photo of real world object, message, instruction, request, user requests including search request and invitations, connection request, user profile, user data, user posted or shared or send contents, user contacts and various types of status stored in database 115.
  • FIG. 1 illustrates a block diagram of the augmented reality scanning or taking photo of real world object and getting associated virtual object system configured to implement the platform where user(s) can provide information about real world object, associated settings, criteria, object criteria, and virtual object(s) and rules and required actions to get virtual object(s), access information about real world objects, receive the recognized augmented reality scanned data or captured photo of real world object associated virtual object(s).
  • While FIG. 1 illustrates a gateway 120, a database 115 and a server 110 as separate entities, the illustration is provided for example purposes only and is not meant to limit the configuration of the augmented reality scanning or taking photo of real world object and getting associated virtual object system.
  • gateway 120, database 115 and server 110 may be implemented in the augmented reality scanning or taking photo of real world object and getting associated virtual object system as separate systems, a single system, or any combination of systems.
  • Figure 2 illustrates an electronic device 200 implementing operations of the invention.
  • the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
  • the processor 230 may be a central processing unit and/or a graphics processing unit.
  • the memory 236 is a combination of flash memory and random access memory.
  • the memory 236 stores Prepare and provide or submit information about real world objects and associated settings, object criteria, location information, schedules, virtual object(s) and required user actions Application (Form / GUI / Map) (Smart Client or Web based) 270 to implement operations of one of the embodiment of the invention.
  • The Prepare and provide or submit information about real world objects and associated settings, object criteria, location information, schedules, virtual object(s) and required user actions Application (Form / GUI / Map) (Smart Client or Web based) 270 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Application (Form / GUI / Map) (Smart Client or Web based) 270 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • the memory 236 stores an Augmented Reality Scanning Application (Smart Client or Web based) 271 to implement operations of one of the embodiment of the invention.
  • the Augmented Reality Scanning Application (Smart Client or Web based) 271 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Augmented Reality Scanning Application (Smart Client or Web based) 271 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • the memory 236 stores a Media Taking (Capture Photo, Recording Video) Application (Smart Client or Web based) 272 to implement operations of one of the embodiment of the invention.
  • the Media Taking (Capture Photo, Recording Video) Application (Smart Client or Web based) 272 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Media Taking (Capture Photo, Recording Video) Application (Smart Client or Web based) 272 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • the memory 236 stores a Notifications Application (Smart Client or Web based) 274 to implement operations of one of the embodiment of the invention.
  • the Notifications Application (Smart Client or Web based) 274 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Notifications Application (Smart Client or Web based) 274 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • a touch controller 215 is connected to the display 210 and the processor 230.
  • the touch controller 215 is responsive to haptic signals applied to the display 210.
  • the electronic device 200 may also include other components commonly associated with a smartphone, such as a wireless signal processor 220 to provide connectivity to a wireless network.
  • A power control circuit 225 and a Global Positioning System (GPS) processor 235 may also be utilized. While many of the components of Figure 2 are known in the art, new functionality is achieved through the application for providing information about real world objects and associated settings, criteria, object criteria, virtual object(s), rules and required actions to get the associated virtual object(s) 270, the application for accessing information about real world objects 273, and the application for sending augmented reality scanned data or a captured photo of a real world object and receiving the recognized data's associated virtual object(s) 271, operating in conjunction with a server.
  • FIG. 2 shows a block diagram illustrating one example embodiment of a mobile device 200.
  • The mobile device 200 includes an optical sensor 240 or image sensor 238, a Global Positioning System (GPS) sensor 244, a position sensor 242, a processor 230, storage 236, and a display 210.
  • the optical sensor 240 includes an image sensor 238, such as, a charge-coupled device.
  • the optical sensor 240 captures visual media.
  • The optical sensor 240 can be used to capture media items such as pictures and videos.
  • The Global Positioning System (GPS) sensor 244 determines the geolocation of the mobile device 200 and generates geolocation information (e.g., coordinates including latitude, longitude, altitude).
  • other sensors may be used to detect a geolocation of the mobile device 200.
  • a WiFi sensor or Bluetooth sensor or Beacons including iBeacons or other accurate indoor or outdoor location determination and identification technologies can be used to determine the geolocation of the mobile device 200.
  • the position sensor 242 measures a physical position of the mobile device relative to a frame of reference.
  • the position sensor 242 may include a geomagnetic field sensor to determine the direction in which the optical sensor 240 or the image sensor 238 of the mobile device is pointed and an orientation sensor 237 to determine the orientation of the mobile device (e.g., horizontal, vertical etc.).
  • The processor 230 may be a central processing unit that includes a media capture application 272.
  • the media capture application 272 includes executable instructions to generate media items such as pictures and videos using the optical sensor 240 or image sensor 238.
  • the media capture application 272 also associates a media item with the geolocation and the position of the mobile device 200 at the time the media item is generated using the Global Positioning System (GPS) sensor 244 and the position sensor 242.
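A minimal sketch of that association step follows, assuming hypothetical sensor-reading helpers (the real sensor APIs are platform specific and are not specified by the disclosure):

```python
# Sketch: attach geolocation and device position to a captured media item (hypothetical helpers).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MediaItem:
    data: bytes
    latitude: float
    longitude: float
    altitude: float
    heading_deg: float        # direction from the geomagnetic field sensor
    orientation: str          # e.g. "horizontal" / "vertical"
    captured_at: str

def capture_and_tag(capture_fn, gps_fn, position_fn) -> MediaItem:
    """capture_fn() -> bytes; gps_fn() -> (lat, lon, alt); position_fn() -> (heading, orientation)."""
    photo = capture_fn()
    lat, lon, alt = gps_fn()
    heading, orientation = position_fn()
    return MediaItem(photo, lat, lon, alt, heading, orientation,
                     datetime.now(timezone.utc).isoformat())

# Stubbed usage:
item = capture_and_tag(lambda: b"jpeg", lambda: (40.758, -73.985, 15.0), lambda: (92.5, "vertical"))
print(item.latitude, item.heading_deg, item.captured_at)
```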
  • the storage 236 includes a memory that may be or include flash memory, random access memory, any other type of memory accessible by the processor 230, or any suitable combination thereof.
  • the storage 236 stores the prepared or provided or submitted information about real world object, associated settings, criteria, object criteria, and virtual object(s) and rules and required actions to get virtual object(s), conducted augmented reality scanned data or captured photo of real world objects, received recognized object in scanned data or captured photo of real world object associated virtual object(s), the media items generated or shared or received by user and also store the corresponding geolocation information, exchangeable image file format (EXIF) data in the case of image files from cameras, smartphones and scanners, auto identified system data including date & time, auto recognized objects in photo or image(s) of video associated keywords, metadata, user profile, one or more types of user data and game data, and user provided information.
  • The storage 236 also stores executable instructions corresponding to the Prepare and provide or submit information about real world objects and associated settings, object criteria, location information, schedules, virtual object(s) and required user actions Application (Form / GUI / Map) (Smart Client or Web based) 270, the Augmented Reality Scanning and receive recognized augmented reality scanned data or captured photo of real world object associated virtual object(s) Application (Smart Client or Web based) 271, the Media Taking (Capture Photo, Recording Video) Application (Smart Client or Web based) 272, and the Access Information about real world objects Application (Smart Client or Web based) 273.
  • the display 210 includes, for example, a touch screen display.
  • The display 210 displays the media items generated by the media capture application 272.
  • a user can conduct augmented reality scanning of real world object(s) and can take picture of real world object(s) by touching the corresponding media items on the display 210.
  • A touch controller monitors signals applied to the display 210 to coordinate the augmented reality scanning, capturing, recording, and selection of the media items.
  • the mobile device 200 also includes a transceiver that interfaces with an antenna.
  • the transceiver may be configured to both transmit and receive cellular network signals, wireless data signals, or other types of signals via the antenna, depending on the nature of the mobile device 200.
  • The Global Positioning System (GPS) sensor 244 may also make use of the antenna to receive GPS signals.
  • the electronic device 200 is a smartphone with a processor 230 in communication with a memory 236.
  • the processor 230 may be a central processing unit and/or a graphics processing unit.
  • The memory 236 is a combination of flash memory and random access memory. The memory 236 stores a Message or Request or Instruction Preparing, Selecting, Saving, Applying Rules, Settings, Preferences and Sending Application (Smart Client or Web based) 276 to implement operations of one of the embodiments of the invention.
  • the Message or Request or Instruction Preparing, Selecting, Saving, Applying Rules, Settings, Preferences and Sending Application (Smart Client or Web based) 276 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Message or Request or Instruction Preparing, Selecting, Saving, Applying Rules, Settings, Preferences and Sending Application (Smart Client or Web based) 276 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • the memory 236 stores a Received Message Management including Accept, Reject, Received Message Specific identifying, scanning or taking image of real world object(s) (Smart Client or Web based) 277 to implement operations of one of the embodiment of the invention.
  • the Received Message Management including Accept, Reject, Received Message Specific identifying, scanning or taking image of real world object(s) Application (Smart Client or Web based) 277 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein.
  • the Received Message Management including Accept, Reject, Received Message Specific identifying, scanning or taking image of real world object(s) Application (Smart Client or Web based) 277 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • the memory 236 stores a Team Management Application (Smart Client or Web based) 279 to implement operations of one of the embodiment of the invention.
  • The Team Management Application (Smart Client or Web based) 279 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein.
  • the Team Management Application (Smart Client or Web based) 279 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • The memory 236 stores a Settings, Preferences & Rules Management Application (Smart Client or Web based) 280 to implement operations of one of the embodiments of the invention. The Settings, Preferences & Rules Management Application (Smart Client or Web based) 280 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Settings, Preferences & Rules Management Application (Smart Client or Web based) 280 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • the memory 236 stores a Feed Application 281 to implement operations of one of the embodiment of the invention.
  • the Feed Application 281 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Feed Application 281 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • The memory 236 stores a Display or Live Update of Real world Story of user on or with or within 2D or 3D Map Application 282 to implement operations of one of the embodiments of the invention. The Display or Live Update of Real world Story of user on or with or within 2D or 3D Map Application 282 may include executable instructions to access a client device and/or a server which coordinates operations disclosed herein. Alternately, the Display or Live Update of Real world Story of user on or with or within 2D or 3D Map Application 282 may include executable instructions to coordinate some of the operations disclosed herein, while the server implements other operations.
  • FIG. 3 illustrates example Graphical User Interface (GUI) 270 for enabling user 305 or server administrator 303 to select particular place or location 310 on map 302 or select accurate location or position of particular object 315 / 325 of real world or visit place 310 and get or identify Global Positioning System (GPS) coordinates, longitude, latitude, altitude 338 of particular object 315 / 325 of real world based on monitored or tracked current location of user device or manually identify and provide Global Positioning System (GPS) coordinates, longitude, latitude, altitude 338 of particular object 315 / 325 of real world, define or draw on map geo-fence boundaries 373 surround said identified and defined object 315 / 325 of real world, set distance of the location of the real world object 315 / 325 and the player or user, wherein the virtual object is displayed or provided to the player or user or add to user’s collection or store to user’s account in response to the client device being within said set distance of the location of the real world object.
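One plausible way to realize the "within a set distance of the real world object" test described above is a great-circle distance check against the stored coordinates. The sketch below is illustrative only (names and the 100 m default are assumptions); a production system might instead use a geospatial index or a polygonal geo-fence test.

```python
# Sketch: decide whether the client device is within the set distance of a defined real world object.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def should_show_virtual_object(device_lat, device_lon, obj_lat, obj_lon, set_distance_m=100.0):
    # Virtual object is displayed / added to the user's account only inside the set distance.
    return haversine_m(device_lat, device_lon, obj_lat, obj_lon) <= set_distance_m

# Example: device roughly 50 m from the defined object -> virtual object displayed.
print(should_show_virtual_object(40.7580, -73.9855, 40.7584, -73.9851))
```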
  • other sensors may be used to detect a geolocation of the mobile device 200.
  • a WiFi sensor or Bluetooth sensor or Beacons including iBeacons or other accurate indoor or outdoor location determination and identification technologies can be used to determine the geolocation of the mobile device 200.
  • User 305 or server administrator 303 can capture or record or select and provide one or more photos 318 or videos 319 of the object 315 / 325 of the real world, and provide or select the object name 332, object type or category or sub-category or taxonomy 335, and physical address 336 of the object 315 / 325.
  • User 305 or server administrator 303 is enabled to provide details or a description or structured details 340 of said real world object 325 and to provide or associate with said object 325 one or more hints or tips or clues 339 (in an embodiment the hint or tips or clue will not be seen by other users).
  • User 305 or server administrator 303 is enabled to select or add one or more new fields 342 and provide, for each added field, one or more types of values or data or one or more types of contents or media.
  • User 305 or server administrator 303 is enabled to provide or select one or more object-related object keywords 344, and to provide or upload or add 345 or design or draw or edit and provide one or more object criteria including one or more object models or images.
  • In an embodiment, the exemplary user of network 305 or other users of the network are limited in providing one or more types of information. For example, in the case of limited providing of information, after providing information in window or interface 357, the exemplary user of network 305 is enabled to save it as a draft for later editing or submission 354 or to edit already drafted information.
  • In an embodiment, only server administrator 303 is enabled or authorized to provide said one or more types of information.
  • enabling or authorizing server administrator 303, or in another embodiment enabling user of network 305, to define or set whether the user needs to conduct Augmented Reality Scanning of the real world object 315 / 325 or capture a photo of the real world object 315 / 325 and send it, or use the camera display to view or scan the real world object 315 / 325 and provide a raw photo of the real world object 315 / 325, to server module 151 of server 110 for processing, recognizing, validating, identifying and displaying the associated one or more virtual objects 366 and associated virtual money 364 and other one or more types of data and metadata; or, in an embodiment, enabling or authorizing server administrator 303, or in another embodiment enabling user of network 305, to define or set whether to display the virtual object(s) and associated virtual money anywhere within the predefined geo-fence boundary 373, so that when a user or player enters said geo-fence boundary the virtual object(s) 366 and associated virtual money 364 are displayed without scanning said real world object 315 / 325.
  • enabling or authorizing server administrator 303 or in another embodiment enabling user of network 305 to hide information about real world object 325 and show only location or place or address of place information (so player need to search, locate real world object and guess or try scanning of objects to identify object which is associated with virtual object(s)) 375 or to hide 374 said details and location about said object of real world 315 / 325 for users of network for enabling them to identify or view and collect or get said real world object 315 / 325 associated virtual object 366 based on guess, or display based on luck or random or lottery or based on deciphering clue or tips or hints.
  • The required one or more types of user actions and activities (to collect or get the displayed virtual objects) comprise: need to take one or more photos or videos of the real world object (e.g. a particular food item) or of the real world object with the displayed virtual object; need to provide details 391 including feedback, comments, the user's one or more types of profile or details, or a survey form; need to provide contact details; need to refer 392 a product or service of the sponsor's business to a particular number of friends or contacts of the user; need to share 392 details or a photo or video of a product or service of the sponsor's business; invite friends 392 to visit the place of the sponsor's business; register 393 with the web site of the sponsor's business; install the application 395 of the sponsor's business; provide comments or feedback or reviews 396 of products or services of the sponsor's business; need to take one or more types of reactions including like, dislike, or providing one or more types of emoticons 397; need to view a particular duration of a presentation of the products and services or business of the sponsor 399; follow 321 the business of the sponsor including shop, company, product, or service; need to add to a favorite or contact list 322; or conduct one or more types of actions or to-dos as per the defined one or more types of rules 376.
  • user of network who scanned said real world object 315 / 325 will automatically get associated virtual object 366 and/or associated virtual money 364.
  • user of network who scanned said real world object 315 / 325 will need to play said set or selected mini game to get or collect or capture said real world object 315 / 325 associated virtual object 366 and/or associated virtual money 364.
  • enabling or authorizing server administrator 303, or in another embodiment enabling user of network 305, to require the user to take a photo of the user with the real world object 315 or 325, or of the real world object 315 or 325 itself, and submit it to server module 151 of server 110; and, in the event of the submitted photo's associated Exchangeable image file format (EXIF) capture date and time, plus an additional duration for submitting the photo (which must be within the provided or pre-set maximum duration to submit the captured photo), matching the server's current date and time (a timestamp check is sketched below), the real world object 315 or 325 associated virtual object is provided or added or stored to the user's account.
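A minimal sketch of that timestamp check, assuming the EXIF capture time has already been parsed (real EXIF extraction would use an imaging library and is not shown); the 30-minute window is an illustrative default, not a value from the disclosure:

```python
# Sketch: validate that a submitted photo was captured recently enough, per its EXIF capture time.
from datetime import datetime, timedelta, timezone

def photo_submission_valid(exif_capture_time: datetime,
                           server_now: datetime,
                           max_submit_window: timedelta = timedelta(minutes=30)) -> bool:
    """True if the server's current time falls within capture time + allowed submission window."""
    return exif_capture_time <= server_now <= exif_capture_time + max_submit_window

# Example: photo captured 10 minutes ago, 30-minute window -> accepted.
now = datetime.now(timezone.utc)
print(photo_submission_valid(now - timedelta(minutes=10), now))
```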
  • Server module 151 receives and verifies the submission and makes it available for other users of the network (discussed in detail in Figures 8-9).
  • The real world object includes, for example, a product, place of business, board name, showcase display item or product, art, sculpture, design, food item, person in-shop, and logo or brand or name.
  • The user can make payment 389 and, in an embodiment, in the case of a user of the network, the user needs to submit said provided information to server module 151 for processing, moderation, verification, validation, and application of the needed settings; after successful validation and verification it is made available for other users of the network.
  • Server module 151 receives said information from the user and enables server administrator 303 to review said information; after successful review, moderation, verification, validation, and application of the needed settings, the server administrator marks said information as verified information 358.
  • enabling server administrator 303 or in another embodiment enabling user of network 305 to preview said information, test applied settings, virtual objects, geo-fence boundary, schedule, and actions 381, enabling to save as draft or edit already exists or saved information 382, save current information 383 or cancel or discard or remove provided information 384.
  • In an embodiment, server module 151 or the server administrator is enabled to suggest or provide or display a number of points or an amount of virtual money for user selection based on object type, location or place, associated types of actions, paid or sponsored or free status, the type of user who provided the information, schedules or duration of publication, and geo-fence boundary.
  • server admin can apply or set one or more types of required actions to collect or get one or more virtual objects when user scans real world object e.g. 315 / 325.
  • The real world object may comprise, but is not limited to, an item, product, showpiece, art, board, design, plate, sculpture, building, home, watch, fountain, neon sign or electric board, flower, tree, furniture, interior, instrument, image or drawing frame, or type of scene in the real world.
  • enabling or authorizing server administrator 303, or in another embodiment enabling user of network 305, to add 330 information about one or more objects of the real world and provide one or more types of settings, preferences, object criteria, virtual objects, schedules, and required actions for users or players of the network to collect said virtual objects when the user scans said real world object or captures a photo of said real world object.
  • server module 151 receives said information and stores to server database 115 and verifies object photos, videos, object name, object address, object details, object location including place, geo-fence boundary, object keywords, object criteria including object models and images, virtual objects, associated virtual money, text, data and metadata, applied settings, schedules, one or more required actions.
  • The server administrator makes said information available on the map or on other one or more types of Graphical User Interfaces (GUIs) for users of the network (discussed in detail in Figures 8-9; Figures 8-14 discuss how the user scans pre-defined or identified real world objects and selects, captures, wins, takes one or more types of actions and gets the scanned real world object's associated one or more virtual objects and associated virtual money).
  • 3rd party developers can design virtual objects and upload them to the server with details for verification and, in the event of successful verification, the server makes said virtual objects available for users of the network as free or payment based or sponsored; developers can define real world objects and associate virtual objects including virtual characters, virtual powers, virtual money, and virtual rewards, and can develop one or more types of mini games and register and upload them to server 110 with details for verification, making said one or more types of one or more mini games available and searchable for other users of the network's selection 377 or for use by the developer in association with developer-defined real world objects (e.g. 315 / 325).
  • In an embodiment, some types of, or location- or place-specific, or named real world objects are available for scanning on a premium basis (e.g. 315 / 325).
  • Figures 4-5 illustrate user interface(s) for, in an embodiment, enabling a sponsor or advertiser or publisher user to create an account, including providing user and entity details 401 (name, age, gender, etc.).
  • The server or system verifies the sponsor or advertiser or publisher or user account(s) or type of account and the associated roles, rules, privacy, rights & privileges and policies and activates the user account to enable the account holder to create and manage one or more advertisement campaigns, advertisement groups, and advertisements and to associate virtual objects, object criteria, object details including object photos or videos, target criteria, geo-fences, and other settings.
  • A campaign or publication comprises a set of advertisement groups (virtual objects, object details, advertisements, object criteria) that share a budget, advertisement model type, location targeting, type of user profile or defined characteristics of user targeting, schedules of targeting, languages targeting, device(s) type(s) targeting, campaign types (discussed in detail in figure 5) and other settings; campaign settings let the advertiser control where and when their advertisements (virtual objects) appear and how much they want to spend, and campaigns are often used to organize categories of products or services that the advertiser offers. The advertiser is enabled to provide a campaign or publication name 404, provide an icon or logo or image 407, provide details 406, search 411 or select 512 a location or place of business on the map or directly provide or input or select the location or place of business 438 including Global Positioning System (GPS) coordinates, longitude, and latitude, add a photo of the object or product or service or brand (for example capture or record or select 418 / 419 and add a photo or video), provide object details 440, provide or select one or more object-related object keywords 444, and provide or upload or add 445 or design or draw or edit and provide one or more object criteria including one or more object models or images 446 / 448. A sketch of this campaign structure follows.
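The campaign / advertisement-group / advertisement hierarchy described above could be modeled roughly as below; every field name and value here is illustrative (an assumption), not part of the disclosure.

```python
# Illustrative data model for campaigns, advertisement groups and advertisements (names are assumptions).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Advertisement:
    name: str
    virtual_object: dict                 # virtual object(s) shown on scan / geo-fence entry
    object_criteria: List[str]           # keywords / object models that trigger the display

@dataclass
class AdGroup:
    name: str
    ads: List[Advertisement] = field(default_factory=list)

@dataclass
class Campaign:
    name: str
    daily_budget: float
    ad_model: str                        # e.g. "pay_per_ar_scan"
    target_locations: List[tuple]        # (lat, lon) of places of business
    languages: List[str] = field(default_factory=list)
    start_date: Optional[str] = None
    end_date: Optional[str] = None
    ad_groups: List[AdGroup] = field(default_factory=list)

campaign = Campaign("Pizza promo", daily_budget=20.0, ad_model="pay_per_ar_scan",
                    target_locations=[(40.758, -73.985)], languages=["en"],
                    ad_groups=[AdGroup("NYC", [Advertisement(
                        "Pizza scan", {"name": "free-slice coupon", "virtual_money": 25}, ["pizza"])])])
print(campaign.ad_groups[0].ads[0].virtual_object)
```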
  • enabling sponsor or advertiser or user 405 to define or set whether users or players of the network need to conduct Augmented Reality Scanning of the real world object 415 / 425 or capture a photo of the real world object 415 / 425 and send it, or use the camera display to view or scan the real world object 415 / 425 and provide a raw photo of the real world object 415 / 425, to server module 151 of server 110 for processing, recognizing, validating, identifying and displaying the associated one or more virtual objects 466 and associated virtual money 445 and other one or more types of data and metadata; or enabling to define or set whether to display the virtual object(s) and associated virtual money anywhere within the predefined geo-fence boundary 515, so that when a user or player enters said defined geo-fence boundary 515, then without scanning said real world object 415 or 425, the virtual objects 466 and/or associated virtual money 445 are displayed or presented to said entered user or player, and said user or player is enabled to get said displayed virtual objects 466 and/or associated virtual money 445, or to get said virtual objects 466 and/or associated virtual money 445 by conducting one or more required actions.
  • The required one or more types of user actions and activities (to collect or get the displayed virtual objects) comprise: need to take one or more photos or videos of the real world object (e.g. a particular food item) or of the real world object with the displayed virtual object; need to provide details 391 including feedback, comments, the user's one or more types of profile or details, or a survey form; need to provide contact details; need to refer 392 a product or service of the sponsor's business to a particular number of friends or contacts of the user; need to share 392 details or a photo or video of a product or service of the sponsor's business; invite friends 392 to visit the place of the sponsor's business; register 393 with the web site of the sponsor's business; install the application 395 of the sponsor's business; provide comments or feedback or reviews 396 of products or services of the sponsor's business; need to take one or more types of reactions including like, dislike, or providing one or more types of emoticons 397; need to view a particular duration of a presentation of the products and services or business of the sponsor 399; follow 321 the business of the sponsor including shop, company, product, or service; need to add to a favorite or contact list 322; or conduct one or more types of actions or to-dos as per the defined one or more types of rules 376.
  • user of network who scanned said real world object 415 / 425 will automatically get associated virtual object 466 and/or associated virtual money 464.
  • user of network who scanned said real world object 415 / 425 will need to play said set or selected mini game to get or collect or capture said real world object 415 / 425 associated virtual object 466 and/or associated virtual money 445.
  • In an embodiment, a geo-fence boundary 515 surrounds the real world object (e.g. 415 / 425) or the place of business 516.
  • The advertisement 505 daily budget is the amount that the advertiser sets for each campaign to indicate how much, on average, the advertiser is willing to spend per day; the advertisement model includes pay per augmented reality scanning 506 or per capturing of a photo of real world objects by users or customers or visitors of the business place for getting the associated virtual objects, wherein the virtual objects are associated with real world objects defined and provided by the sponsor or advertiser or user 405.
  • advertiser or sponsor or user 405 can search and select one or more target real world objects 501 each associated with particular location or place or one or more types of target real world objects 502 scattered at different locations or search and select one or more movable target real world objects 503 (e.g. elephant at Yellowstone national park) or natural scene each associated with particular location or one or more types of movable target real world objects 504 (e.g. animal) or natural scene scattered at different locations or search and select one or more geo-fence boundaries 522 or search and select one or more types of geo-fence boundaries 523 for displaying virtual objects (e.g. 466) related to advertisement when users scans or takes picture of said selected real world objects.
  • The advertiser can provide associated target criteria, including adding, including, excluding or filtering one or more languages 509 and the schedule of showing the advertisement including start date 510 and end date.
  • The advertiser can also provide the target user's profile type or characteristics or modeling of target users, including any users of the network or target-criteria-specific users of the network, based on one or more types of one or more profile fields including gender, age or age range, education, qualification, home or work locations, and related entities including organization or school or college or company name(s), and Boolean operators and any combination thereof 507.
  • The user or publisher or advertiser can save the campaign 595 at server database 115 of server 110 via server module 151 and/or at a local storage medium of the user device 200, so the user can access, update, start 585, pause 586, stop or remove or cancel 584, and view and manage 590 one or more created campaigns and the associated information and settings including one or more advertisement groups 592 and advertisements 582, and can access the started one or more campaigns', advertisement groups' and advertisements' associated or generated analytics and statistics 593.
  • One or more object criteria including object models 446 / 448 / 444 can trigger or display virtual objects 466 when someone, i.e. any user of the network, scans or views (via an eye glass or spectacles equipped with a video camera and connected with the user device) something similar to said supplied image 9250 (e.g. user [A] visits the New York City "Domino's Pizza" shop 410 / 516 and scans or views "Pizza" 415 / 425 via the user device camera or via eyeglasses or digital spectacles containing said object criteria); the system matches and recognizes said scanned or viewed image against the object criteria or object models associated with advertisements and identifies the advertisements, i.e. keywords, presented to said scanner or viewer user (a matching sketch follows).
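One way such matching could work is to compare an embedding of the scanned or viewed image against embeddings of the supplied object-model images. The sketch below is a hedged illustration: `embed` is a stand-in for a real image-recognition or embedding model, and the threshold is an assumption, not a value from the disclosure.

```python
# Sketch: trigger an advertisement's virtual objects when a scanned image matches the stored object model.
from math import sqrt
from typing import List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def matches_object_model(scan_vec: Sequence[float],
                         model_vecs: List[Sequence[float]],
                         threshold: float = 0.85) -> bool:
    """True if the scanned/viewed image is similar enough to any supplied object-model image."""
    return any(cosine(scan_vec, m) >= threshold for m in model_vecs)

# Stubbed usage: pretend embeddings for the scanned pizza photo and the advertiser's object-model images.
scanned = [0.9, 0.1, 0.4]
object_models = [[0.88, 0.12, 0.41], [0.2, 0.9, 0.1]]
if matches_object_model(scanned, object_models):
    print("display the advertisement's virtual object(s) to the scanning user")
```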
  • The user 405 can make an order and payment 597 and submit said provided information to server module 151 for processing, moderation, verification, validation, and application of the needed settings; after successful validation and verification it is made available for other users of the network.
  • Server module 151 receives said information from the user and enables the server administrator to review said information; after successful review, moderation, verification, validation, and application of the needed settings, the server administrator marks said information as verified information (displaying a verified icon or badge, e.g. 413).
  • server module 151 receives said information and stores to server database 115 and verifies object photos, videos, object name, object address, object details, object location including place, geo-fence boundary, object keywords, object criteria including object models and images, virtual objects, associated virtual money, text, data and metadata, applied settings, schedules, one or more required actions.
  • The server administrator makes said information available on the map or on other one or more types of Graphical User Interfaces (GUIs) for users of the network (discussed in detail in Figures 8-9; Figures 8-14 discuss how the user scans pre-defined or identified real world objects and selects, captures, wins, takes one or more types of actions and gets the scanned real world object's associated one or more virtual objects and associated virtual money).
  • advertiser or sponsor or user 405 can create new 588 or save 594 or manage 590 one or more advertisement campaigns and can add new advertisement group 591 or manage existing advertisement groups 592.
  • The advertiser or sponsor or user 405 can create a new advertisement (publish or display a virtual object to users when the user or player or customer or prospective customer visits the place of the advertiser and conducts augmented reality scanning of the advertiser-defined real world object, or takes a photo of the real world object provided or defined by the advertiser, e.g. a particular food item, or enters into the advertiser-defined one or more geo-fence boundaries).
  • advertiser or sponsor or user 405 can save or update 583 or remove 584 or manage 582 created or drafted or published or started advertisement(s).
  • advertiser or sponsor or user 405 can starts 585 or pause 586 already verified advertisements.
  • advertiser or sponsor or user 405 can schedule publishing of advertisement 587.
  • advertiser or sponsor or user 405 can view advertisement campaign, advertisement groups and advertisements related statistics and analytics including number of user viewed details about said real world object e.g. 425, number of users scanned and try to capture photo or conduct augmented reality scan of said real world object 425, number of users scanned or capture photo or conduct augmented reality scan of said real world object 425.
  • Figure 6 illustrates user interface(s) for server admin 605 to define generalized named or typed objects in the real world which are tied or not tied to a particular location, including named or typed animals like elephant and horse, birds like peacock, sea creatures like fish, flowers like rose, mountains including any mountain or a mountain at a particular location or place or geo-fence boundary, trees including palm, building, temple, museum, library, art gallery, petrol pump, road, river, pond, wall, pool, island, water or any type of infrastructure including any building or a building at a particular location or place or geo-fence boundary, generalized or unbranded objects or object types in the real world that are not yet defined by the server or users of the network, including objects other than defined objects (e.g. a watch at Times Square is defined but a watch at a particular shop is not yet defined) like watch, mobile, moving objects including birds, animals, natural scenes including sunrise, sunset, rainbow, rain, a particular brand of car moving on any road, a flying airplane, bus, train, a particular scene or image in a video or movie, a particular song or line of a song, a particular type of music, music played by a particular instrument, generalized types of activities or actions including particular types of dance, music, singing, sports (cricket, carrom, soccer, badminton), running, walking, talking, viewing, expressions (smile, acting, swag), style (hair, face, cloth, make-up), attending an event, conducting an activity with one or more contacts, and conducting one or more types of digital activities, or conducting one or more types of digital activities at a particular website or application or at a particular website or application associated with a particular brand, company, named person, shop or entity, including online viewing or visiting of web sites, viewing products and services, online purchase, add to cart, taking one or more reactions including like, dislike, providing emoticons, commenting, referring, or sharing on one or more types of contents from one or more websites and applications, and viewing videos.
  • enabling server admin 605 to capture one or more photos or record one or more videos or select and provide one or more photos 618 or videos 619 of an object, e.g. elephant 615, of the real world, provide or select the object name 632 and object type or category or sub-category or taxonomy 635, and define or draw a geo-fence boundary 685 (e.g. the area of a zoo, park, garden, museum, forest, or mountain) surrounding the normal availability of the object, e.g. elephant 615, of the real world, so users or players of the network can physically reach or visit said place of the object or object of the real world or use map directions & routes and step-by-step or guided directions to physically reach said real world object's 615 location or place.
  • enabling server admin 605 to provide details or description or structured details 640 of said real world object 625. In an embodiment enabling server admin 605 to select or add one or more new fields 642 and provide each added field specific one or more types of values or data or one or more types of contents or media. In an embodiment enabling server admin 605 to provide or select object related one or more object keywords 644, provide or upload or add 645 or design or draw or edit 649 and provide one or more object criteria including one or more object models or images 646 / 648.
• enabling server admin 605 to provide, select 667, import, search, purchase, design, edit, update, upgrade, add or upload one or more types of one or more virtual objects or virtual elements or virtual characters 666, and to provide or select and associate a custom or user-defined number or particular amount or value of virtual money or virtual currency or points or numbers 664, or use the server's pre-set or pre-defined or pre-associated number or particular amount or value of virtual money or virtual currency or points or numbers 664 for the particular category or type 635 of real world object 615, or for the identified or recognized category or type of real world object 615 based on recognizing the object in a photo or video of real world object 615 and identifying associated keywords, categories or types.
• enabling server admin 605 to define, customize or configure one or more geo-fence boundaries 685, or draw geo-fence boundaries (e.g. 515) on the map (e.g. 512) surrounding real world object 615.
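The following is a minimal sketch, in Python, of the geo-fence containment check implied above: given a boundary drawn on the map (e.g. 515 / 685) as a list of vertices, it tests whether a tracked device location falls inside it using ray casting. The coordinates, fence shape and function names are illustrative assumptions, not the disclosed implementation, which could equally use a geospatial library or a radius-based fence.

```python
# Minimal ray-casting point-in-polygon test for a drawn geo-fence boundary.
# Coordinates are treated as planar (lat, lng) pairs, which is adequate for
# small fences; a production system would likely use a geodesic library.
from typing import List, Tuple

LatLng = Tuple[float, float]

def inside_geofence(point: LatLng, fence: List[LatLng]) -> bool:
    """Return True if `point` lies inside the polygon `fence` (ray casting)."""
    lat, lng = point
    inside = False
    n = len(fence)
    for i in range(n):
        lat1, lng1 = fence[i]
        lat2, lng2 = fence[(i + 1) % n]
        if (lng1 > lng) != (lng2 > lng):            # edge straddles the ray
            cross = lat1 + (lng - lng1) / (lng2 - lng1) * (lat2 - lat1)
            if lat < cross:
                inside = not inside
    return inside

# Illustrative fence roughly enclosing a zoo area (values are made up).
zoo_fence = [(40.7500, -73.9900), (40.7510, -73.9900),
             (40.7510, -73.9880), (40.7500, -73.9880)]
print(inside_geofence((40.7505, -73.9890), zoo_fence))  # True
```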
• enabling server admin 605 to define or set whether the user needs to conduct augmented reality scanning of the real world object, e.g. "elephant" 615, or capture a photo of real world object 615 and send it, or use the camera display to view or scan real world object 615 and provide a raw photo of real world object 615 to server module 151 of server 110 for processing, recognizing, validating, identifying and displaying the associated one or more virtual objects 666, associated virtual money 664 and other one or more types of data and metadata. In an embodiment, enabling or authorizing server administrator 605 to define or set whether the virtual object(s) and associated virtual money are displayed anywhere 686 or anywhere within the predefined geo-fence boundary 685 (e.g. 515), so they are shown when the user or player enters into said defined geo-fence boundary (e.g. 515).
• enabling or authorizing server administrator 605 to hide said details and location about said object of the real world 615 from users of the network, enabling them to identify or view and collect or get said real world object 615 associated virtual object 666 based on a guess, or displayed based on luck, randomness or lottery, or based on deciphering a clue, tips or hints.
• enabling server administrator 605 to define, set or apply one or more schedules 601 of availability of said real world object 615 associated virtual objects 666 and/or associated virtual currency 664, including start date and time 685 and end date and time 686, in the event of a user scanning said real world object 615 (discussed in detail in figures 8-14).
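As a rough illustration of the availability schedule 601, the sketch below only treats a scan as eligible when its time falls between the start (685) and end (686) date & time; the field names and window values are assumptions for the example, not the disclosed data model.

```python
# Check whether a user's scan falls inside the published availability window.
from datetime import datetime, timezone

schedule = {
    "start": datetime(2018, 8, 1, 9, 0, tzinfo=timezone.utc),    # start date & time 685
    "end":   datetime(2018, 8, 31, 18, 0, tzinfo=timezone.utc),  # end date & time 686
}

def scan_is_within_schedule(scan_time: datetime, sched: dict) -> bool:
    """True if the scan happened inside the configured availability window."""
    return sched["start"] <= scan_time <= sched["end"]

print(scan_is_within_schedule(datetime.now(timezone.utc), schedule))
```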
• enabling server administrator 605 to apply, select or define one or more types of user actions, activities, rules to fulfill, games or mini games to play, or call-to-actions, making it required, necessary or mandatory for users of the network to conduct said defined one or more types of activities, actions or call-to-actions, or to fulfill the associated rules or play the mini game, in order to collect or get the displayed virtual objects and associated virtual money in the event that the user of the network scanned, did augmented reality scanning of, or captured a photo of said real world object 615, wherein the one or more types of user actions and activities comprise the need to play a pre-set game 677, or the need to take one or more photos 672 or videos 380 of the real world object, e.g. a particular food item or real world object
• In an embodiment, the user of the network who scanned said real world object 615 will automatically get the associated virtual object 666 and/or associated virtual money 664.
• In an embodiment, in the event of selection of the play mini games option and selection of the type of game 677, the user of the network who scanned said real world object needs to play said mini game to get the associated virtual object and virtual money.
• enabling server administrator 605 to require the user to take a photo of the user with real world object 615, or of real world object 615, and submit it to server module 151 of server 110; in the event that the submitted photo's associated Exchangeable image file format (EXIF) captured-photo date and time, plus an additional duration for submitting the photo (which must be within the provided or pre-set maximum duration to submit the captured photo), matches the server's current date and time, the real world object 615 associated virtual object 666 and/or associated virtual money 664 is provided, added or stored to the user's collection of virtual objects and virtual money or virtual currency, or to the user's account.
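A minimal sketch of the EXIF freshness check described above follows: it reads the capture timestamp embedded in the submitted photo and accepts the submission only if it is within a pre-set maximum duration of the server's current time. It assumes the Pillow imaging library; the file path, the use of the IFD0 DateTime tag (306) and the 10-minute window are illustrative choices rather than the actual implementation.

```python
# Accept a submitted photo only if its embedded capture time is recent enough.
from datetime import datetime, timedelta
from PIL import Image

MAX_SUBMIT_DELAY = timedelta(minutes=10)  # pre-set maximum duration to submit

def photo_is_fresh(path: str, server_now: datetime) -> bool:
    exif = Image.open(path).getexif()
    # 306 = DateTime, "YYYY:MM:DD HH:MM:SS"; a fuller check would also read
    # DateTimeOriginal from the Exif sub-IFD.
    raw = exif.get(306)
    if raw is None:
        return False  # no capture timestamp -> reject the submission
    captured = datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")
    return timedelta(0) <= server_now - captured <= MAX_SUBMIT_DELAY

print(photo_is_fresh("submitted_photo.jpg", datetime.now()))
```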
• enabling server administrator 605 to review said information and, after successful review, moderation, verification, validation and applying the needed settings, mark said information as verified information 658 (displaying a verified icon or badge, e.g. 312) and make it available for users of the network.
• enabling server administrator 605 to preview said information; test the applied settings, virtual objects, geo-fence boundary, schedule and actions 681; save as draft or edit already existing or saved information 682; save the current information 683; or cancel, discard or remove the provided information 684.
• enabling server module 151 or server administrator 605 to suggest, provide or display a number of points or an amount of virtual money for user selection based on object type, location or place, associated type of actions, paid or sponsored or free status, the type of user who provided the information, the schedules or duration of publication, and the geo-fence boundary.
• After successful verification, the server administrator makes said information available on the map or on other one or more types of Graphical User Interfaces (GUIs) for users of the network (figures 8-14 discuss in detail how a user scans pre-defined or identified real world objects and selects, captures, wins, takes one or more types of actions and gets the scanned real world object's associated one or more virtual objects and associated virtual money).
• enabling server administrator 605 to add or define 630 another object of the real world.
  • enabling server administrator 605 to provide or define or configure one or more types of attributes 643 of virtual object 666 including add type of attributes 690 for using in playing of one or more types of games, wherein attributes may comprise Attack Power (AP) (Physical Strength) 629, Mind Power (MP) 631, life 633, Virtual Money Value (VMV) 627, get maximum number of said virtual objects 666 per day or within particular duration 628, set how to increase power of said particular virtual object by conducting one or more tasks, activities, actions, using of one or more types of virtual objects and number of or amount of virtual money or virtual currency or points 636.
• For example, the elephant virtual object 666 needs 5 trees daily for maintenance, and each additional tree increases physical power and/or mind power by a particular number.
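The attribute set described above (AP 629, MP 631, life 633, VMV 627, per-day limit 628) could be modelled as a simple record, as in the sketch below; the class, field names and the tree-feeding numbers are illustrative assumptions rather than the disclosed data model.

```python
# A simple record for configurable virtual-object attributes, with the
# "extra trees raise power" maintenance rule from the elephant example.
from dataclasses import dataclass

@dataclass
class VirtualObjectAttributes:
    attack_power: int         # AP 629
    mind_power: int           # MP 631
    life: int                 # 633
    virtual_money_value: int  # VMV 627
    max_per_day: int          # 628

    def feed_trees(self, trees: int, daily_need: int = 5,
                   ap_per_extra: int = 2, mp_per_extra: int = 1) -> None:
        """Each tree beyond the daily need raises AP/MP by a set amount."""
        extra = max(0, trees - daily_need)
        self.attack_power += extra * ap_per_extra
        self.mind_power += extra * mp_per_extra

elephant = VirtualObjectAttributes(attack_power=50, mind_power=30, life=100,
                                   virtual_money_value=20, max_per_day=3)
elephant.feed_trees(trees=8)   # 3 extra trees -> +6 AP, +3 MP
```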
• After defining or configuring one or more types of attributes, enabling server administrator 605 to save 639, edit 638, or remove, cancel or discard 642 the defined or configured attributes and settings.
  • Figure 7 shows a flow diagram illustrating one example embodiment of a method 700 of the virtual object publication module 151.
  • the virtual object publication module 151 receives uploaded virtual object data and uploaded object criteria from a first client device.
  • operation 710 may be implemented with the virtual object data upload module, the object criteria upload module.
• the virtual object engine 153 generates a virtual object based on the uploaded virtual object data and, at operation 718, associates it with the uploaded object criteria.
• a check is made whether the location associated with the scanned or photographed object in the real world sufficiently matches said identified location of the second client device at the time of the augmented reality scanning or capturing of the photograph of the object. If the location associated with the real world object which the user scanned or photographed sufficiently matches said identified location of the second client device, then at operation 740 the date & time of the augmented reality scanning or capturing of the photograph is identified. At operation 745, a check is made whether the date & time in the exchangeable image file format (EXIF) data of the scanned data (raw photo) or captured photo sufficiently, or as per the rules or settings, matches the server's current date and time.
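Operation 730 amounts to a proximity test between the tracked device location and the location stored for the pre-defined real world object. A minimal sketch of such a test is shown below; the 100-metre tolerance and the coordinates are assumptions for illustration only.

```python
# Great-circle (haversine) distance check between the device location and
# the location stored for the defined real world object.
import math

def haversine_m(lat1, lng1, lat2, lng2) -> float:
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_matches(device_loc, object_loc, tolerance_m: float = 100.0) -> bool:
    """True if the device is close enough to the object's stored location."""
    return haversine_m(*device_loc, *object_loc) <= tolerance_m

# Device standing next to the defined object (coordinates are made up).
print(location_matches((40.75802, -73.98551), (40.75810, -73.98560)))  # True
```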
  • the virtual object engine 153 determines that a photograph has been taken by a second client device and visually searches the photograph in order to perform object recognition on the photograph.
• the virtual object engine 153 determines whether a recognized object in the photograph satisfies the uploaded object criteria associated with a particular location or place, Global Positioning System (GPS) coordinates, longitude, latitude, address, geo-fence boundary or range of location from the first client device. If not, then the method 700 returns to operation 720 in order to search for more objects in the photograph. If the recognized object does satisfy the uploaded object criteria from the first client device, then at operation 750 the server module 153 displays the virtual object associated with the recognized real world object at the particular identified location to the second client device.
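Conceptually, operations 730-750 reduce to comparing the labels returned by object recognition against the stored object criteria and, on a match, returning the associated virtual object and virtual money. The sketch below illustrates that lookup with an in-memory criteria store; the records, keywords and return values are invented for the example, and a real store would also carry location, schedule and action rules.

```python
# Match recognized labels against stored object criteria and return the
# associated virtual object and virtual money for the first satisfied entry.
object_criteria_store = [
    {"keywords": {"hard rock building"}, "virtual_object": "guitar_badge", "virtual_money": 50},
    {"keywords": {"pizza", "food"},      "virtual_object": "chef_hat",     "virtual_money": 10},
]

def match_criteria(recognized_labels):
    labels = {label.lower() for label in recognized_labels}
    for criteria in object_criteria_store:
        if labels & criteria["keywords"]:   # any overlapping keyword satisfies it
            return criteria["virtual_object"], criteria["virtual_money"]
    return None                             # no criteria satisfied

print(match_criteria(["Pizza", "table"]))  # ('chef_hat', 10)
```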
  • Figure 8 (A) illustrates Graphical User Interface (GUI) of digital interactive map 810 or virtual world user interface 810, wherein user or player 805 can view avatar of player 809 at current location 801 of user device 200 based on monitored and tracked current location of user device 200 by server 110.
• The user can search 804 locations, places and pre-defined objects of the real world (discussed in detail in figures 3-6), and can search 804 for a location, place or pre-defined geo-fence associated pre-defined objects (discussed in detail in figures 3-6) in the real world based on one or more search keywords, Boolean operators, criteria, filters, advanced search options and any combination thereof.
• The user can view, on real world map 810 or virtual world user interface 810, pre-defined real world objects (e.g. 808, discussed in detail in figures 3-6) specific to the search query 804 or surrounding the user's current location 801 / 809.
• The user can view details 820 about pre-defined real world objects (discussed in detail in figures 3-6) related to a particular place 801 on map 810 or virtual world user interface 810, wherein the details 820 about said real world object may comprise the object name 832, object details verification badge or certification 833, object type or category 835, object place or location or physical address 836, object details 840, and the user name 843 of the user who provided details about or defined the object, together with associated statistics and analytics including the number of users who viewed details about said real world object 808 and the number of users who scanned, captured a photo of or conducted an augmented reality scan of said real world object 808.
• enabling user or player 805 to report 841 details of real world object 808 as spam, inappropriate, or fully or partially incorrect.
• enabling user or player 805 to search, select, navigate on the map and view one or more real world objects related to one or more locations, places or geo-fence boundaries from the map interface or virtual world interface.
• enabling user or player 805 to find directions and a route 827, use step-by-step guided directions 828 to reach a particular real world object or a particular place related to one or more real world objects, and view the distance from the current location and the estimated or approximate duration or time to reach there.
• Figure 8 (B) illustrates Graphical User Interface (GUI) 271 for enabling the user to conduct augmented reality scanning 874 or scan the real world object 855 with the camera on the mobile phone 200 using the mobile phone application 271, and illustrates Graphical User Interface (GUI) 272 for taking a photograph 872 of the real world object 855 with the camera on the mobile phone 200 using the mobile phone application 271.
  • Figure 8 (B) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 874 or taken a photograph 855 at a particular location 338 / 336 / 310 at a particular date & time (discussed in detail in figure 7) that includes the recognized object 890 or 891 or 892 that corresponds sufficiently to specified object 346 or 348 or 349 and therefore satisfies the object criteria.
• the server module 153 of the virtual object application 136 retrieves the associated virtual object 366 corresponding to the satisfied object criteria 346 or 348 or 349 associated with pre-defined real world object 325 at particular location or place 338 / 336 / 310 and displays the virtual object 851 and associated virtual money 852 to the mobile device 200.
• the virtual object 851 and associated virtual money 852 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • user or player 805 need to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 851 and associated virtual money 852.
  • user or player 805 needs to visit virtual object associated place as quickly as possible before other players reach there and get or collect virtual object 851 and associated virtual money 852.
• Rules may comprise a limited number of virtual objects within a particular period, needing a particular level to collect particular types of virtual objects, needing a particular number of contacts to get virtual objects, needing a particular number of (e.g. [Y]); as per the rules, the user may play a particular type of mini game, or select, or automatically get one or more displayed virtual object(s).
• the user needs to walk and reach the place of the real world object associated with said virtual object 851 and associated virtual money 852, the user must be accompanied by at least two contacts or friends, or the user needs to purchase a real world object, including a product or service of said place's associated business, sponsor, advertiser or seller, and submit a digital or scanned receipt to server module 153.
  • Figure 9 (A) illustrates Graphical User Interface (GUI) of digital interactive map 910 or virtual world user interface 910, wherein user or player 905 can view avatar of player 909 at current location 901 of user device 200 based on monitored and tracked current location of user device 200 by server 110.
• The user can search 904 locations, places and pre-defined objects of the real world (discussed in detail in figures 3-6), and can search 904 for a location, place or pre-defined geo-fence associated pre-defined objects (discussed in detail in figures 3-6) in the real world based on one or more search keywords, Boolean operators, criteria, filters, advanced search options and any combination thereof.
• The user can view, on real world map 910 or virtual world user interface 910, pre-defined real world objects (e.g. 908, discussed in detail in figures 3-6) specific to the search query 904 or surrounding the user's current location 901 / 909.
• The user can view details 920 about pre-defined real world objects (discussed in detail in figures 3-6) related to a particular place 901 on map 910 or virtual world user interface 910, wherein the details 920 about said real world object may comprise the object name 932, object details verification badge or certification 933, object type or category 935, object place or location or physical address 936, object details 940, and the user name 943 of the user who provided details about or defined the object, together with associated statistics and analytics 944 including the number of users who viewed details about said real world object 908 and the number of users who scanned, captured a photo of or conducted an augmented reality scan of said real world object 908.
• enabling user or player 905 to report 941 details of real world object 908 as spam, inappropriate, or fully or partially incorrect.
• enabling user or player 905 to search, select, navigate on the map and view one or more real world objects related to one or more locations, places or geo-fence boundaries from the map interface or virtual world interface.
• enabling user or player 905 to find directions and a route 927, use step-by-step guided directions 928 to reach a particular real world object or a particular place related to one or more real world objects, and view the distance from the current location and the estimated or approximate duration or time to reach there.
• Figure 9 (B) illustrates Graphical User Interface (GUI) 271 for enabling the user to scan the real world object 965 with the camera on the mobile phone 200 using the mobile phone application 271, and illustrates Graphical User Interface (GUI) 272 for taking a photograph of the real world object 965 with the camera on the mobile phone 200 using the mobile phone application 271.
  • Figure 9 (B) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has taken a photograph 965 at a particular location at a particular date & time (discussed in detail in figure 7) that includes the recognized object 990 that corresponds sufficiently to specified object 446 or 448 or 449 and therefore satisfies the object criteria.
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 466 corresponding to the satisfied object criteria 446 or 448 or 449 and displays 951 the virtual object 466 to the mobile device 200.
• the virtual object 951 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • Figure 9 (C) illustrates an example of a publication of a virtual object.
  • user or player 905 need to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 951 and associated virtual money 952.
• the user needs to submit a digital or scanned receipt to the server for getting, collecting, acquiring, storing, adding to the user's collection of virtual objects, winning or selecting said displayed virtual object 951 and associated virtual money 952.
• Server module 153 receives from the user 905 the scanned or photographed receipt 985 of the purchased product or service, and identifies the Exchangeable image file format (EXIF) data of the scanned or photographed receipt 985, including the original date & time of the captured photo, and matches it with the server's current date & time to verify the originality of the submission.
• Server module 153 also identifies the monitored or tracked current location or place of the user's device 200, or identifies that it entered and stayed in a geo-fence boundary at the time of sending the scanned or photographed receipt 985, and identifies or recognizes the unique business name 975, place or location 976, date & time of purchase 977, amount of purchase, and quantity, names and details 978 of one or more products or services 425 or 908 or 965 from the received scanned or photographed receipt 985 based on object recognition and Optical Character Recognition (OCR) technologies.
• Server module 153 validates the business based on recognizing the business name and location from the received scanned or photographed receipt 985 using object recognition and Optical Character Recognition (OCR) technologies, and matches said recognized business location or place 515 / 516 / 410 / 436 / 438 with the current location of the device 200 of the user who uploaded or submitted said scanned or photographed receipt 985.
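As an illustration of the receipt-processing step, the sketch below runs OCR over a submitted receipt image and pulls out a business name, a purchase date & time and a total amount. It assumes the pytesseract wrapper around the Tesseract OCR engine is installed; the regular expressions and the assumption that the business name is the first printed line are simplifications that would need adapting to real receipt layouts.

```python
# OCR a receipt photo and extract a few fields for validation.
import re
from PIL import Image
import pytesseract

def parse_receipt(path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(path))
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    date_match = re.search(r"\d{2}/\d{2}/\d{4}\s+\d{2}:\d{2}", text)
    total_match = re.search(r"(?:TOTAL|AMOUNT)\D*(\d+\.\d{2})", text, re.IGNORECASE)
    return {
        "business_name": lines[0] if lines else None,  # usually printed first
        "purchase_datetime": date_match.group(0) if date_match else None,
        "total": float(total_match.group(1)) if total_match else None,
    }

print(parse_receipt("receipt_985.jpg"))
```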
• After successfully validating the originality and details of the purchased products, server module 153 displays the purchased or scanned product 965 associated virtual objects 951 and associated virtual money 952.
• Responsive to the client device 200 being within a set distance 520 of the location 515 / 516 / 410 / 436 / 438 of the particular business in the real world, providing, by the server module 153, said location or place 515 / 516 / 410 / 436 / 438 associated business 410 associated one or more types of virtual objects or virtual elements or a reward 466 / 445 or 951 / 952 to the user or player 905 / 909.
  • a virtual object publication application 151 receives virtual object data including virtual object image, photo, video, 2D or 3D image, animation, one or more types of multimedia file, brand name, logo 366 / 466, associated particular number or amount of virtual money or virtual currency or points 364 / 445, and object criteria 346/348 or 446/448 and location information 336 / 338 / 373 or 436 / 438 / 515 of associated real world object 315 / 325 / 415 /
  • the virtual object 366 / 466 is associated with satisfaction of the object criteria
  • a virtual object engine 153 identifies that a client device has conducted augmented reality scanning 855 / 965 or has taken a photograph 855 / 965 from particular location 801 / 901 of real world object 808 / 908 or within pre-set range of location
  • the virtual object engine 153 then provides the virtual object 851 / 951 to the client device 200 based on the photograph or scanned data or raw photo
  • the virtual object 851/951 may then be displayed on a Graphical User Interface (GUI) of the client device 200.
• The object criteria 346/348 or 446/448 may include associations between an object 808/908 and a source of image data 855/965, for example a name, a logo or a brand, in which case the associated virtual object may include images associated with the product, service, brand, logo and company of the sponsor.
• For example, if the user scans real world object 855 or takes a photograph of real world object 855 and an object in the photograph 855 or scanned data 855 or augmented reality scan associated raw photo 855 is recognized as the pre-defined or pre-configured real world object 325, i.e. the Hard Rock Building, virtual objects 366 associated with said real world object 325, i.e. the Hard Rock Building, may be provided or displayed 851 to the user on or with the photograph 855.
• Third party or external entities, including advertisers, sellers, sponsors, vendors, shops and users, may, in one example embodiment, create virtual objects 466 / 445 for display to a user when the user scans or conducts augmented reality scanning of real world object 425, based on recognition of an object 965 satisfying criteria 446 / 448 specified by the creator or provider 405 of the virtual object(s) 466 / 445.
• a photograph 965 including an object 425 recognized as a pizza may result in the user being presented with virtual object 466 / 445 overlaid on or over the photograph 965, or a photograph 965 including an object recognized as a food type may result in the user being presented with a generalized virtual object provided or defined by the server administrator (discussed in detail in figure 6).
  • Third party entities may also bid (or otherwise purchase opportunities) to have a virtual object(s) included in a set presented to a user for augmentation of a particular photograph.
  • the virtual object 466 / 951 may also be constrained by a geo-fence (e.g., geographic boundary) 515 around the availability of real world object 516.
• the virtual object application 151 is connected to or communicates with a virtual object engine 153 that determines that a mobile device 200 has taken a photograph 855 / 965 and, based on the photograph 855 / 965 including an object that satisfies the object criteria 346/348 or 446/448, provides the virtual object 366/466 to the client device 200.
  • the virtual object engine 153 includes an object recognition module configured to find and identify objects in the photograph 855 / 965; and compare each object against the object criteria 346/348 or 446/448.
  • the object criteria 346/348 or 446/448 may include associations between an object (e.g. 856 / 857) and a source of image data 855 / 965.
  • the virtual object publication application uses the virtual object publication module 151 to provide a Graphical User Interface (GUI) 270 ( Figure 3-5) for a user 303 /305 / 405 to upload virtual object data 332 / 335 / 336 / 338 / 340 / 366 / 364 or 432 / 435 / 436 / 438 / 440 / 466 /
  • the user may upload an image 366 or 466 for the creation of a virtual object and specify criteria 346 / 348 / 344 or
  • the virtual object publication module 151 generates a virtual object 851 / 951 that includes the image 366 / 466 and is associated with satisfaction of the specified object criteria 346 / 348 / 344 or 446 / 448 / 444.
  • mobile devices that have taken a photograph 855 / 965 including a recognized object that satisfies the specified object criteria 346 / 348 / 344 or 446 / 448 / 444 may have access to the virtual object 366 / 466.
  • the virtual object may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, multimedia and sound effects.
  • Figure 1 shows a block diagram illustrating one example embodiment of the virtual object application 136.
  • the virtual object application 136 includes a virtual object publication module
  • the virtual object publication module 151 provides a platform for publication of virtual objects.
  • the virtual object publication module 151 enables users of client devices (either mobile or web clients) 200 to upload virtual object data for generating a virtual object and object criteria for comparing against recognized objects in a photograph.
  • the virtual object engine 153 identifies that a client device (e.g., 200) via client application 270 ( Figures 8 (B) and 9 (B)) has augmented reality scanning 271 or has taken a photograph 272 and visually searches the photograph in order to recognize objects in the photograph or scanned data or raw photo 855 / 965.
  • the virtual object engine 153 includes an object recognition, face or body recognition, voice recognition and optical character recognition module.
  • the virtual object(s) provided to a client device 200 by the virtual object engine 153 may be based on object criteria 346 / 348 / 344 or 446 / 448 / 444 determined to be satisfied, via object recognition, face or body recognition, voice recognition and optical character recognition module, by a recognized object in the photograph 855 / 965 and identifying location of user who scanned real world object or capture a photo of real world object based on monitored or tracked current location of client device 200 (figure 8(B) or 9(B)) of user 805 or 905 and matching said identified location with said captured or scanned real world object associated location 338 / 336 or 438 / 436.
• the object recognition module of the virtual object engine 153 first visually searches the photograph in order to find and identify objects in the photograph 855 / 965. This may be accomplished by employing existing object recognition technologies such as SentiSight SDK TM, Viola Jones Object Detection Framework TM, YOLO TM, Clarifai TM, edge matching, divide-and-conquer search, greyscale matching, histograms of receptive field responses, large modelbases, gradient matching, etc.
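Any of the named detectors could fill the "find and identify objects" role. Purely for illustration, the sketch below uses the ultralytics YOLO package (a modern, pre-trained COCO detector) to return the set of labels found in a submitted photo; the package choice, model file and confidence threshold are assumptions, not part of the disclosure.

```python
# Detect objects in the submitted photo and return their class labels.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pre-trained COCO model, assumed to be available

def recognize_objects(photo_path: str, min_conf: float = 0.5):
    result = model(photo_path)[0]
    labels = set()
    for box in result.boxes:
        if float(box.conf) >= min_conf:
            labels.add(result.names[int(box.cls)])
    return labels

# The returned labels (e.g. {"elephant"}) can then be compared against the
# uploaded object criteria as sketched earlier.
print(recognize_objects("scan_855.jpg"))
```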
  • the object recognition module of the virtual object engine 153 then compares each recognized object against the specified object criteria to determine if object criteria associated with a particular virtual object have been satisfied and, if so, providing said virtual object to the client device 200.
  • the object criteria may include associations between an object and a source of image data.
  • Figures 3-6 illustrates an example of a GUI for uploading virtual object data and for uploading object criteria with respect to recognized objects in a photograph.
  • the GUI displays an upload object models 346 / 348 / or 446 / 448, an upload image files 366 / 466, an object criteria 344 or
  • the upload image files 366 / 466 enables a user to upload image files, (e.g., a photograph, a graphic, an animation, a multimedia or a video or 2D or 3D image or a music or voice any combination thereof) to the virtual object data upload module of server module 151.
  • the object criteria 346 / 348 / 344 or 446 / 448 or 444 enables the user to upload object criteria by inputting specific requirements that must be satisfied by a recognized object in the photograph.
  • the user can upload a specific object model to be used by the virtual object engine 153 to compare to objects recognized in a photograph.
  • the object criteria are submitted to server module 151.
  • the virtual object data enables the user to submit one or more types of data and metadata, for example a virtual object associated virtual money or virtual currency or text for the virtual object.
  • the user may submit the image files, virtual object data and object criteria by clicking on the submit/verify button 388 / 589.
  • the publication engine 153 generates and displays a virtual object based on the image files, virtual object data and object criteria.
  • the positions of players can be monitored or tracked using, for instance, a positioning system (e.g. a Global Positioning System (GPS) system) associated with a player's mobile device.
  • Player position information can be provided to a game server and can be used by the game server to update player locations in the real world map or virtual world.
• the player also continuously moves about in a range of coordinates in the real world map or virtual world, which enables a virtual world geography that corresponds to the real world geography.
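One simple way to keep the virtual world geography in step with the real world is to project every GPS fix to virtual world coordinates, as in the sketch below; the Web Mercator projection, the world size and the function name are illustrative assumptions rather than the disclosed mechanism.

```python
# Project a (lat, lng) GPS fix to virtual-world (x, y) coordinates.
import math

WORLD_SIZE = 65536  # width/height of the virtual world in its own units (assumed)

def gps_to_virtual(lat: float, lng: float):
    """Web Mercator projection of a GPS fix into the virtual world grid."""
    x = (lng + 180.0) / 360.0 * WORLD_SIZE
    lat_rad = math.radians(lat)
    y = (1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * WORLD_SIZE
    return x, y

# As the tracked device location changes, the avatar (e.g. 809 / 909) is
# moved to the corresponding virtual coordinates.
print(gps_to_virtual(40.7580, -73.9855))  # roughly Times Square
```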
• Players can view pre-defined real world objects and associated information on the real world map (discussed in detail in figures 3-6), reach there physically or use the map's directions, step-by-step guided directions and route to reach there, and conduct augmented reality scanning of the real world object or take a photo of the real world object; in the event of receiving, processing, recognizing, identifying the location and date & time of the scanning or photo, and validating the scanned data or raw photo or captured photo of the real world object, server application 136 displays the virtual objects.
• Players can view a pre-defined real world geo-fence on the real world map and reach there physically or use the map's directions, step-by-step guided directions and route to reach there, and server module 136 randomly displays virtual objects based on one or more types of factors, rules, settings, preferences and user data, which the user can view on the real world map within the defined geo-fence boundary and can get, win, battle to get, capture, select, play a mini game and get, or conduct one or more types of rule-specific tasks, activities, actions, call-to-actions, participations or transactions and get.
  • server module 136 stores or adds said virtual objects to user’s collections of virtual objects or associate with user’s account.
• The real world objects, elements, items or scenes can be associated with virtual objects, elements, items, money or currencies by defining the real world object (discussed in detail in figures 3-6), so a player can reach said defined real world object's associated place in the real world and can scan it and collect the associated virtual objects.
• Linking real world objects or elements with virtual elements in the virtual environment creates interest among players to conduct one or more types of physical world activities, actions, participations, transactions and communications. For instance, as players navigate geographic coordinates in the real world, the players can discover and interact with or decipher real world objects and, in the event of augmented reality scanning or taking of a photo, the player is presented with virtual objects or virtual elements or virtual money, or the player is presented with virtual objects or virtual elements or virtual money surprisingly or anywhere within a pre-defined geo-fence boundary.
  • one or more types of computing device including smart mobile phone may obtain an image of the object.
  • the image may be obtained by scanning the object, taking one or more pictures of the object, etc.
  • the user may use a camera to augmented reality scan or scan or take a picture, a photograph of the object and may send the image of the object to the server and/or may store the image of the object locally (e.g., in a local database).
  • Figures 10-14 illustrates various examples according to an exemplary embodiment of the present disclosure.
  • Figure 10 (A) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 1024 or taken a photograph 1025, originally at a particular date & time, that includes the recognized object 1005 that corresponds sufficiently to specified object 1007 and therefore satisfies the object criteria.
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 1001 corresponding to the satisfied object criteria 1007 (i.e. related to particular general food item or menu item which may available in many restaurants and not tied or related with or defined by particular advertiser or brand or place or seller or restaurant) and displays the virtual object 1001 and associated virtual money 1002 to the mobile device 200.
• the virtual object 1001 and associated virtual money 1002 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • user or player 1012 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 851 and associated virtual money 852.
• the user is required to play mini game 1022, which is pre-defined or pre-associated with said virtual object 1001 and associated virtual money 1002.
  • user is presented with movable mouth with character 1004.
• The user can drag and move said displayed movable mouth character 1004 anywhere on the photo; as it moves over photo 1025, part of the photo is removed, creating the illusion that the user is eating the food item depicted in the captured photo of the real world food item, and in the event of completely or sufficiently removing the food item depicted in the captured photo of the real world food item, the user may get the virtual object 1001 and associated virtual money 1002, which may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
• Server module 153 may automatically capture, receive, process and associate one or more types of data and metadata, including the user identity, the recognized object's associated identified keywords and categories and their associated information, the associated virtual object(s) and/or particular amount or number of virtual money, the digital receipt if submitted by the user and its associated recognized information based on object or optical character recognition, the date & time of capturing and receiving the photo, and the identified location or place at the time of capturing the photo based on the monitored and tracked current location of the user device, and stores said photo and associated details for later retrieval and presentation.
  • Figure 10 (B) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has taken a photograph 1035 at a particular location at a particular date & time (discussed in detail in figure 7) that includes the recognized object 1035 that corresponds sufficiently to specified object 1035 and therefore satisfies the object criteria.
• the server module 153 of the virtual object application 136 retrieves the associated virtual object 1031 corresponding to the satisfied object criteria 1035 and displays the virtual object 1031 to the mobile device 200.
• the virtual object 1031 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • Figure 10 (C) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 1074 on object 1055 or taken a photograph 1055, originally at a particular date & time, that includes one or more recognized objects 1061 / 1062, wherein recognized object 1061 corresponds sufficiently to specified object 1065 and recognized object 1063 corresponds sufficiently to specified object 1064, and therefore the object criteria are satisfied.
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 1051 corresponding to the satisfied object criteria 1065 / 1064 and displays the virtual object 1051 and associated virtual money 1052 to the mobile device 200.
• the virtual object 1051 and associated virtual money 1052 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
• Based on recognition of the face of an artist, detection of a song and/or music based on voice recognition, recognition of music instruments based on object recognition, identification of the type of place where the user originally captured the photo or conducted the augmented reality scanning, or any combination thereof, the system identifies that the user is conducting a particular type of physical world activity, including viewing an orchestra, music or singing show at a particular identified or relevant place (e.g. hotel, club, restaurant, event, show, drama, orchestra, play, party), and provides said recognized or identified or guessed or determined activity type's associated one or more types of virtual objects and/or virtual money.
  • user or player 1062 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1051 and associated virtual money 1052.
• Server module 153 monitors, tracks and recognizes one or more types of the user's activities and actions, including: monitoring and tracking the walking of a particular number of steps, miles or kilometers by the user based on the user device's sensors; playing sports, singing a song, or playing music with music instruments at a particular type of location or place (class, club, event, show, theatre) for a particular duration; identifying that the user checked in at a particular named or type of place (movie or drama theater, event, restaurant, hotel, club, class, mall, shop); detecting traveling via cruise, cab, bus, train or flight based on the speed change of the user's location within a particular duration and a stay for a particular duration, based on the monitored and tracked current location or place of the user device and accessing place associated information; identifying the user's participation at a particular named or type of event based on the monitored and tracked current location or place of the user device and accessing event associated information; detecting the conducting of transactions by the user based on linking with the seller's system or database; identifying user-provided status; and identifying a festival, the user's birthday, anniversary, party or event on a particular date; and displays the activity type and/or user data associated one or more types of virtual objects and/or virtual money.
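The travel-detection part of this monitoring can be approximated from the speed implied by consecutive location fixes, as in the sketch below; the thresholds, fix format and classification labels are assumptions for illustration only.

```python
# Guess a travel mode from two consecutive tracked location fixes.
import math
from datetime import datetime

def approx_distance_m(lat1, lng1, lat2, lng2) -> float:
    """Equirectangular approximation, adequate for short hops between fixes."""
    x = math.radians(lng2 - lng1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371000.0

def guess_travel_mode(fix_a, fix_b) -> str:
    """Each fix is ((lat, lng), datetime) as produced by location tracking."""
    (loc_a, t_a), (loc_b, t_b) = fix_a, fix_b
    seconds = max((t_b - t_a).total_seconds(), 1.0)
    speed_kmh = approx_distance_m(*loc_a, *loc_b) / seconds * 3.6
    if speed_kmh < 7:
        return "walking"
    if speed_kmh < 150:
        return "driving / bus / train"
    return "flight"

print(guess_travel_mode(((40.7580, -73.9855), datetime(2018, 8, 1, 9, 0, 0)),
                        ((40.7700, -73.9700), datetime(2018, 8, 1, 9, 5, 0))))
```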
  • Figure 11 (A) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 1124 of the real world object, e.g. the logo of a particular brand 1105, or taken a photograph 1111 of the real world object, e.g. the logo of a particular brand 1105, originally at a particular date & time, that includes the recognized object 1113 that corresponds sufficiently to any specified object 1114 within the set of object criteria and therefore satisfies the object criteria.
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 1101 corresponding to the satisfied object criteria 1116 and displays the virtual object 1101 and associated virtual money 1102 and associated one or more types of offer e.g. coupon 1103 to the mobile device 200.
• the virtual object 1101 and associated virtual money 1102 and associated one or more types of offer, e.g. coupon 1103, may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
• user or player 1112 needs to conduct one or more required activities, actions, call-to-actions, participations or transactions, or play mini games, to get, collect, acquire, store, win or select said displayed virtual object 1101 and associated virtual money 1102.
  • user can redeem said coupon 1103 in real world.
  • user need to scan at particular place of business to get particular type of virtual object and/or offers.
• The user can scan a brand name or brand logo from any real world object, including an item, product, electrified board, board or receipt, to get a particular type of virtual object and/or offers, wherein the system recognizes the name of the brand based on Optical Character Recognition (OCR) to identify the object keyword 1115 and identifies the associated particular type of virtual object(s) 1102/1103 and/or offers 1103.
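The brand-keyword path can be reduced to an OCR pass followed by a keyword lookup, as in the sketch below; the keyword table, the offers and the use of the pytesseract wrapper are illustrative assumptions, not the disclosed implementation.

```python
# OCR the scanned board or receipt and look up offers by registered brand keyword.
from PIL import Image
import pytesseract

brand_offers = {
    "hard rock": {"virtual_object": "guitar_pin", "coupon": "10% off merchandise"},
    "acme cola": {"virtual_object": "bottle_cap", "coupon": "buy 1 get 1 free"},
}

def offers_for_scan(path: str):
    """Return the offers whose brand keyword appears in the OCR'd text."""
    text = pytesseract.image_to_string(Image.open(path)).lower()
    return [offer for keyword, offer in brand_offers.items() if keyword in text]

print(offers_for_scan("board_photo.jpg"))
```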
  • server 110 stores user specific virtual objects or enable particular identified sponsor or advertiser or user to provide specific or different virtual objects including virtual money, virtual reward including coupon, offer, discount, redeemable points, voucher, cashback offer for different set of users of network or to selected or provided particular one or more identified users of network or types of users of network.
  • Figure 11 (B) illustrates an example of a publication of a virtual object.
• In an embodiment, the user needs to submit a digital or scanned receipt to server module 153 for getting, collecting, acquiring, storing, adding to the user's collection of virtual objects 1151, winning or selecting said displayed virtual object 1151 and associated virtual money 1152.
• The user scans the receipt 1155 of the particular purchased product by tapping or clicking on the augmented reality scanning icon or button 1174, or views the receipt 1155 of the purchased product by employing the camera application 272 / 1172, or takes or captures 1172 a photo of the receipt 1155 of the purchased particular product, and submits it within a pre-set duration of the purchase of the particular product by tapping or clicking on the submit icon or button 1171.
• Server module 153 receives from the user 1162 the scanned or photographed receipt 1155 of the purchased particular product and, based on the received receipt 1155, validates the actual purchase of one or more products or services by user 1162 from said receipt's associated identified business or place of business of the sponsor or advertiser, including the unique business name 1154, place or location of business 1158, date & time of purchase 1156, amount of purchase, and quantity, names and details 1157 of one or more products or services, wherein the Exchangeable image file format (EXIF) data in the scanned or photographed receipt 1155, including the original date & time of the scanned or captured photo of receipt 1155, is identified and said extracted or identified original date & time is matched with the server's current date & time to validate, check or verify the originality of the captured photo 1155.
• Server module 153 also identifies the monitored or tracked current and visited locations or places of the user's device 200, or identifies that it entered and stayed in a geo-fence boundary, and identifies or recognizes the unique business name 1154, place or location 1158, last four digits of the debit or credit card, date & time of purchase 1156, amount of purchase, and quantity, names and details 1157 of one or more products or services from the received scanned or photographed receipt 1155 based on object recognition and Optical Character Recognition (OCR) technologies.
• Server module 153 validates the business based on recognizing the business name and location from the received scanned or photographed receipt 1155 using object recognition and Optical Character Recognition (OCR) technologies, and matches said recognized business location or place with the logged user-visited locations or places or the current location of the device 200 of the user who uploaded or submitted said scanned or photographed receipt 1155.
• Server module 153 matches the scanned receipt's identified last four digits of the debit or credit card, obtained via Optical Character Recognition (OCR), with the last four digits of the debit or credit card associated with the user profile. After successfully validating the originality and details of the purchased products, server module 153 displays the purchased product associated virtual objects, including one or more types of offers, vouchers, coupons, discounts, redeemable points, cash back offers and deals.
  • Figure 11 (C) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted augmented reality scanning or taken a photograph 1135 at a particular location at a particular date & time (discussed in detail in figure 7) that includes a recognized or interpreted object including one or more types of code, including a barcode or QR code (Quick Response Code).
  • the server module 153 of the virtual object application 136 interprets associated code and identifies and retrieves associated virtual object 1132 / 1134 and displays said virtual objects 1132 / 1134 to the mobile device 200. In an embodiment user need to select any one of or particular number of virtual objects from displayed virtual objects 1132 / 1134.
  • a barcode is a machine-readable optical label that contains information about the item to which it is attached.
  • a QR code consists of black squares arranged in a square grid on a white background, which can be read by an imaging device such as a camera, and processed using Reed-Solomon error correction until the image can be appropriately interpreted. The required data is then extracted from patterns that are present in both horizontal and vertical components of the image.
  • a smartphone is used as a QR code scanner, displaying the code and converting it to some useful form (such as identify associated virtual objects).
• QR codes also may be linked to a location to track where a code has been scanned. Either the application that scans the QR code retrieves the geo information by using Global Positioning System (GPS) and cell tower triangulation (aGPS), or the URL encoded in the QR code itself is associated with a location.
• server module 153 matches the location of the scanned code with the monitored or tracked current location of the user device to authenticate or verify the code scanning.
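A minimal sketch of that code path follows: decode the QR code from the camera frame, read the location it is tied to, and accept the scan only if the device's tracked location is close enough. It assumes the pyzbar wrapper around ZBar for decoding and an invented payload format of "object_id;lat;lng"; a deployed system would more likely resolve the code against a server-side registry.

```python
# Decode a QR code and verify the scan happened near the code's location.
import math
from PIL import Image
from pyzbar.pyzbar import decode

def decode_and_verify(frame_path: str, device_loc, tolerance_m: float = 150.0):
    codes = decode(Image.open(frame_path))
    if not codes:
        return None
    object_id, lat, lng = codes[0].data.decode("utf-8").split(";")
    # Equirectangular distance between the code's location and the device.
    x = math.radians(float(lng) - device_loc[1]) * math.cos(math.radians(device_loc[0]))
    y = math.radians(float(lat) - device_loc[0])
    if math.hypot(x, y) * 6371000.0 > tolerance_m:
        return None  # scanned too far from where the code is registered
    return object_id

print(decode_and_verify("qr_frame.jpg", (40.7580, -73.9855)))
```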
  • Figure 11 (D) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 1194 of a real world object (e.g. elephant 615, as defined in figure 6).
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 666 / 1181 corresponding to the satisfied object criteria 646 / 648 and displays the virtual object 666 / 1181 and associated virtual money 664 to the mobile device 200.
• the virtual object 666 / 1181 and associated virtual money 664 / 1182 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • user or player 1192 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 666/1181 and associated virtual money 664/1182.
• the user is required to play mini game 677, which is pre-defined or pre-associated with said virtual object 666 and associated virtual money 664.
• Via mini game icon 1198, the user is presented with a cage 1197 that automatically moves or changes position on the camera screen.
• Server module 153 recognizes objects in the submitted scanned data or raw photo or captured photo of said defined real world object and matches them only against said real world object's associated object criteria, and does not need to match against all object criteria in the database, so it saves server resources substantially.
• The server monitors and tracks the location of the user device only after the user scans, conducts augmented reality scanning of, or captures a photo of the real world object and submits it to server module 153; after receiving the submitted scanned data or captured photo of the real world object, server module 153 requests the user to start the location service of user device 200 if the location service is not started, or, if the location service is already started, uses the location service of user device 200 only after receiving the scanned data or captured photo of the real world object, so it saves server resources substantially.
• The user can define a real world object, make said object available for scanning only to contacts, provide associated object criteria, hide said location of said defined object on the map (discussed in figure 3), and send a message to one or more contacts, wherein the message comprises a description, tips or clues to find said particular object.
• User 1212 received the message "One of the Deer's Neck in Time Square" from one of the user's contacts. Based on that, user 1212 finds and reaches the displayed "Deer" in Times Square and starts augmented reality scanning 1224 of each deer's neck until the particular deer's neck (e.g. 1205) associated virtual object 1201 pops up or is shown to user or player 1212.
  • Figure 12 (A) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 1205, or taken a photograph 1205, of part of a scene or object in the real world, originally at a particular date & time, that includes the recognized object 1221 that corresponds sufficiently to specified object 1222 and therefore satisfies the object criteria.
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 1201 corresponding to the satisfied object criteria 1222 and displays the virtual object 1201 and associated virtual money 1202 to the mobile device 200.
• the virtual object 1201 and associated virtual money 1202 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • user or player 1212 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1201 and associated virtual money 1202.
• A sponsor or advertiser user can define a real world object or scene, e.g. movable characters found on an electrified board, make said object available for scanning only to invitees, provide associated object criteria, hide said location of said defined object on the map (discussed in figure 4), and send a message to one or more contextual users of the network, including customers or prospective customers presently visiting a particular location, wherein the message comprises a description, tips or clues to find said particular object.
• User 1252 received a message, "Wedding Bells words in Times Square Electrified Advertisements", from the sponsor. Based on that, user 1252 finds and reaches the electrified board at Times Square which displays "Wedding Bells" and conducts augmented reality scanning 1262 of said words 1257 with the intention to get the associated virtual object 1256.
  • Figure 12 (C) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan of part of a scene or object in the real world 1257 / 1255, or of the scene which contains said words "Wedding Bells", or taken a photograph 1255 of part of the scene or object or of the scene which contains said words "Wedding Bells" in the real world, originally at a particular date & time, that includes the recognized words "Wedding Bells" based on Optical Character Recognition (OCR) and therefore satisfies the object criteria.
• the server module 153 of the virtual object application 136 retrieves the associated virtual object 1256 corresponding to the satisfied object criteria, i.e. the words "Wedding Bells", and displays the virtual object 1256 and associated virtual money 1258 to the mobile device 200.
• the virtual object 1256 and associated virtual money 1258 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • user or player 1252 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1256 and associated virtual money 1258.
  • Figure 12 (B) illustrates user interface wherein based on monitoring and tracking of one or more types of user’s physical world and digital activities, actions, participations, check in place, transactions, status, reactions, communications, and sharing, notifying to user about receiving of one or more types of virtual objects 1230, virtual money 1250, virtual elements, virtual power, virtual goods, virtual rewards including redeemable points, voucher or coupon 1235.
  • FIG. 12 (D) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan of real world object 1295 (e.g. a "Bentley Car" moving or stopped at a signal on the road) or taken a photograph 1295 by tapping on the photo capture icon 1290, that includes the recognized object 1291 that corresponds sufficiently to specified object 1292 and therefore satisfies the object criteria 1292.
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 1283 corresponding to the satisfied object criteria 1292 and displays the virtual object 1282 and associated virtual money 1281 to the mobile device 200.
• the virtual object 1282 and associated virtual money 1281 may then be stored to the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
• Server module 153 validates the scanning of the actual car based on recognition of the car number (provided at the time of defining the real world object, i.e. said car), the monitored or tracked current location of the scanner's device, and the monitored or tracked current location of the device of the driver or passenger(s) seated inside the car.
  • Figure 13 (A) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 1324 of a particular real world object, e.g. Trophy 1305, or taken a photograph 1325 of the particular real world object, e.g. Trophy 1305, at a particular location, or at a particular location at a particular date & time, by authorized users including contacts, invited users or team members or their family members or fans or visitors or viewers pre-defined by the object definer or object provider or authorized administrator (discussed in detail in figures 3 and 7), that includes the recognized object 1321 that corresponds sufficiently to specified object 1322 and therefore satisfies the object criteria.
• the server module 153 of the virtual object application 136 retrieves the associated virtual object 1301 corresponding to the satisfied object criteria 1322 associated with pre-defined real world object at particular location or place and displays the virtual object 1301 and associated virtual money 1302 to the mobile device 200.
• the virtual object 1301 and associated virtual money 1302 may then be stored in the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
• user or player 1312 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1301 and associated virtual money 1302.
• a notification is sent to the user about said place's or geofence's associated real world objects, enabling the user to scan or conduct augmented reality scanning of said place or geofence associated real world objects to get the associated virtual objects.
  • Figure 13 (B) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scanning 1332 of a particular real world object, e.g. Cake 1332, or taken a photograph 1332 of the particular real world object, e.g. Cake 1332, within said pre-defined real world object's (e.g. the Cake's) associated geo-fence boundary (so a birthday party attendee can scan the cake, write birthday wishes and get the associated virtual object) at a particular date & time, by authorized users including contacts, invited users, relatives and their family members pre-defined by the object definer or object provider or authorized administrator (discussed in detail in figures 3 and 7), wherein the scan includes the recognized object 1335 that corresponds sufficiently to the specified object 1332 and therefore satisfies the object criteria.
• the server module 153 of the virtual object application 136 retrieves the associated virtual object 1333 corresponding to the satisfied object criteria 1332 associated with the pre-defined real world object (e.g. the Cake) within said pre-defined real world object's (e.g. the Cake's) associated geo-fence boundary at the particular date & time, and displays the virtual object 1333 and associated virtual money 1331 to the mobile device 200.
• the virtual object 1333 and associated virtual money 1331 may then be stored in the user's collection of virtual objects 1333/1331 or added to the user's account (discussed in detail in figures 22-25).
• user or player 1314 is required to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1333 and associated virtual money 1331.
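• For illustration, a minimal sketch (assumed parameter names, not the actual server logic) of the eligibility check implied by this example: the scan must come from an authorized attendee, inside the object's geo-fence boundary, within the scheduled date & time window:

```python
from datetime import datetime

def scan_is_eligible(scanner_id: str,
                     authorized_ids: set[str],
                     inside_geofence: bool,
                     scan_time: datetime,
                     window_start: datetime,
                     window_end: datetime) -> bool:
    """True only when every condition pre-defined by the object definer holds."""
    return (scanner_id in authorized_ids
            and inside_geofence
            and window_start <= scan_time <= window_end)

party_start, party_end = datetime(2018, 8, 11, 18, 0), datetime(2018, 8, 11, 23, 0)
print(scan_is_eligible("guest-42", {"guest-42", "guest-7"},
                       inside_geofence=True,
                       scan_time=datetime(2018, 8, 11, 19, 30),
                       window_start=party_start, window_end=party_end))  # True
```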
  • Figure 13 (C) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scanning 1324 of a particular real world person, e.g. celebrity 1365, or taken a photograph 1365 of the particular real world person, e.g. celebrity 1365, at a particular location, or at a particular location of an identified event at a particular date & time, by authorized users including attendees, visitors, guests, members, ticket holders and invited users, wherein said real world person is pre-defined by the object definer or object provider or authorized administrator (discussed in detail in figures 3 and 7), and wherein the scan includes the recognized object or person 1361 that corresponds sufficiently to the specified object 1362 and therefore satisfies the object criteria.
• the server module 153 of the virtual object application 136 retrieves the associated virtual object 1351 corresponding to the satisfied object criteria 1362 associated with pre-defined real world object or person at particular location of pre-defined or identified event or place and displays the virtual object 1351 and associated virtual money 1352 to the mobile device 200.
• the virtual object 1351 and associated virtual money 1352 may then be stored in the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
• Server module 153 identifies the place of the event, the date & time of the event, and the place and date & time of the augmented reality scanning of the human face or body 1365 based on the associated EXIF data, identifies the celebrity based on face or body-part recognition techniques, and based on the identification of the face identifies the associated name of the celebrity; after that, server module 153 displays or provides the associated one or more types of virtual objects and virtual money.
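• A minimal sketch of the EXIF freshness check mentioned above, assuming the EXIF DateTimeOriginal string has already been extracted from the received photo or scanned data (the tolerance value is illustrative, and time zones are ignored for brevity):

```python
from datetime import datetime, timedelta

def exif_time_is_fresh(exif_datetime_original: str,
                       server_now: datetime,
                       tolerance: timedelta = timedelta(minutes=10)) -> bool:
    """Accept the photo or scan only if its EXIF capture time is close enough to
    the server clock, i.e. it appears to have been captured live, not re-used."""
    captured = datetime.strptime(exif_datetime_original, "%Y:%m:%d %H:%M:%S")
    return abs(server_now - captured) <= tolerance

print(exif_time_is_fresh("2018:08:09 14:02:11",
                         server_now=datetime(2018, 8, 9, 14, 5, 0)))  # True
```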
• a user can define a person and the associated virtual objects, criteria, schedules, geofence boundary or location, and required actions to get said displayed virtual objects in the event of conducting augmented reality scans or taking a picture of said person, and allow invitees and attendees of said particular place or event or geofence boundary to scan said pre-defined person and get the associated virtual objects, virtual money, virtual rewards and virtual gifts or virtual lucky gifts, which may be different for different users, or limited gifts available on a first-scan-first-get basis, wherein the virtual rewards and virtual gifts may be redeemable in the real world and the event may comprise a birthday, marriage, anniversary, party and one or more other types of events.
• AR scanning 1374/1365 overlays digital imagery 1351 onto the results of a mobile device scan 1374/1365. This means that the system can superimpose graphics, animation, and other virtual content 1351 on the screen 1363 of a smartphone 200, tablet or wearable device when a user scans 1374/1365.
  • Figure 13 (D) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scanning 1385, recorded a video, or taken a photograph 1385 while watching a particular television program or serial 1385.
• upon augmented reality scanning 1397, or scanning via the camera application 1398 of the user device, of said television program or serial 1385, the server receives the photo or image or video or scanned data 1385 from the user device.
• the server module 153 of the virtual object application 136 validates the actual or original date and time of the received scanned data or captured photograph or image 1385 based on sufficiently matching the Exchangeable image file format (EXIF) data associated with the received scanned data or photograph or image 1385, including the date and time of capturing the photo or scan 1385, with the current date and time of the server.
• the server module 153 of the virtual object application 136 identifies or recognizes the television program or serial channel name and identity 1376 based on recognizing the logo 1376 of the television channel 1376 using object recognition and Optical Character Recognition (OCR) techniques.
• based on the identified object satisfying the object criteria 1377, including an object model or image or object keywords associated with the virtual object in the stored data, the server displays or provides the virtual object 1372 and associated virtual money 1371 to the client device, and stores the virtual objects 1372 and associated virtual money 1371 provided to the client device in a virtual object collection associated with the client device 200.
• user or player needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1372 and associated virtual money 1371.
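• A simplified sketch, with hypothetical channel data, of mapping OCR text extracted from the on-screen logo region to a channel identity before looking up the associated virtual object (the real system would combine this with object recognition and scheduling data):

```python
from typing import Optional

def channel_from_ocr(ocr_text: str, known_channels: dict[str, str]) -> Optional[str]:
    """Map OCR text taken from the logo region of a scanned TV frame to a
    channel identity; `known_channels` maps a lower-cased keyword to a name."""
    text = ocr_text.lower()
    for keyword, channel in known_channels.items():
        if keyword in text:
            return channel
    return None

channels = {"star plus": "Star Plus", "sony": "Sony Entertainment"}
print(channel_from_ocr("STAR PLUS HD", channels))  # Star Plus
```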
  • user can view, select, capture, record, or scan particular scene, object, item, thing, product, logo, name, person or group(s) of persons and scene via user device camera display screen or wearable device(s) e.g. eye glasses or digital spectacles which is/are equipped or integrated with video cameras, Wi-Fi connection, memory and connected with user’s smart device(s) e.g. mobile device or smart phone.
  • Figure 14 (A) illustrates user interface for enabling user to manually scan or conduct augmented reality or take photo or scan or auto scan real world object(s) based on recognition of object from received real world object’s related scanned data or raw photo or captured photo via one or more types of wearable device including eye glasses or digital spectacles equipped with video camera and connected with user device(s) including smart phone.
• the user is enabled to set automated augmented reality scanning, wherein automated augmented reality scanning is enabled when the user enters a particular pre-defined place or geofence boundary where pre-defined real world objects are available for user scanning; then, in the event of arriving near the pre-defined real world object's place or staying in or inside the place, automated augmented reality scanning happens via the digital spectacles worn by the user, and in the event of viewing a particular pre-defined real world object by using the video camera of the digital spectacles, server module 153 receives said scanned data or captured photo of said viewed real world object and recognizes the object(s) in the received scanned data or captured photo(s); in the event of recognizing the real world object, the server module 153 of the virtual object application 136 retrieves the associated virtual object(s) corresponding to the satisfied object criteria associated with the pre-defined real world object at the particular location or place and displays the virtual object and associated virtual money to the mobile device 200, or enables the user to get said real world object's associated virtual object(s) or add said real world object's associated virtual object(s) to said user's account or collections.
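• The following is a small illustrative state machine, with invented names, for toggling automated scanning when the tracked device enters or leaves pre-defined geofences; it is only one possible way to realize the behaviour described above:

```python
from dataclasses import dataclass, field

@dataclass
class AutoScanController:
    """Enables automated AR scanning of the wearable camera while the tracked
    device is inside at least one pre-defined geofence."""
    active_geofences: set[str] = field(default_factory=set)
    scanning: bool = False

    def on_location_update(self, geofences_containing_user: set[str]) -> None:
        self.active_geofences = set(geofences_containing_user)
        if self.active_geofences and not self.scanning:
            self.scanning = True   # start streaming frames from the spectacles
        elif not self.active_geofences and self.scanning:
            self.scanning = False  # stop once the user has left all geofences

ctrl = AutoScanController()
ctrl.on_location_update({"central-park-zoo"})
print(ctrl.scanning)  # True: auto-scanning enabled inside the geofence
ctrl.on_location_update(set())
print(ctrl.scanning)  # False
```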
• the user is enabled to view and scan, capture a photo of, or conduct augmented reality scanning of a particular real world object by tapping button 1406, via the video cameras 1401 and/or 1403 associated with or integrated in the spectacles 1405, which are connected with device 200; the spectacles 1405 have an integrated wireless video camera 1401 and/or 1403 that enables the user to view, scan, capture photos or record video clips and save them in the spectacles 1405 and/or to the user device 200 connected with the spectacles 1405 via one or more communication interfaces, or save them to the database or storage medium 115 of server 110.
• the glasses 1402 or 1404 enable the user to view, conduct augmented reality scanning, or begin to capture a photo or record video after user 510 taps a small button 1406 near the left or right camera.
• the camera can scan or capture photos or record videos for a particular period of time or until the user stops it.
• the snaps will live on the user's Spectacles until the user transfers them to the smartphone 200 and uploads them to the server 110 database 115 or storage medium 115 via Bluetooth or Wi-Fi or any communication interface, channel, medium, application or service.
• the system identifies an object inside the real-time view 1402/1404, inside data scanned by tapping on button 1406, or inside a captured photo or recorded video (i.e. a particular image inside the video).
• the system matches said identified or recognized object 1421 with the object criteria 1422/1423/1424 and presents or displays the associated virtual objects to the user device 200.
  • Figure 14 (A) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a digital spectacles device 1405 has conducted an augmented reality scanning 1402 of the real world object, e.g. the logo of a particular brand, or taken a photograph 1402/1404 of the real world object, e.g. the logo of a particular brand, originally at a particular date & time, which includes the recognized object 1421 that corresponds sufficiently to any specified object 1422/1423/1424 within the set of object criteria and therefore satisfies the object criteria.
  • the server module 153 of the virtual object application 136 retrieves the associated virtual object 1445/1432 corresponding to the satisfied object criteria 1424 and displays the virtual object 1445/1432 and associated virtual money and associated one or more types of offer e.g. coupon or voucher or free gift or redeemable points 1445 to the digital spectacles device 1405.
• the virtual object 1445, associated virtual money and associated one or more types of offer, e.g. coupon 1432, may then be stored in the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • user or player 1412 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1445 and associated virtual money and associated one or more types of offer e.g. coupon 1432.
• for example, the user may be required to share the photo or video with a particular number of the user's contacts by clicking or tapping on a share-with-friends-or-contacts button or icon.
• the user then gets the virtual object 1432, associated virtual money and associated one or more types of offer, e.g. coupon 1432, which may then be stored in the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
  • user can redeem said coupon 1445 in real world.
• the user needs to scan at a particular place of business to get a particular type of virtual object and/or offers.
• the user can scan a brand name or brand logo from any real world object including an item, product, electrified board, board or receipt to get a particular type of virtual object and/or offers, wherein based on the object keyword 1115 the system recognizes the name of the brand using Optical Character Recognition (OCR) to identify the object keyword 1418 and identifies the associated particular type of virtual object(s) 1432 and/or offers 1445.
• server 110 stores user-specific virtual objects or enables a particular identified sponsor or advertiser or user to provide specific or different virtual objects, including virtual money and virtual rewards including a coupon, offer, discount, redeemable points, voucher or cashback offer, for different sets of users of the network, or to selected particular one or more identified users of the network or types of users of the network.
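• A hedged sketch of how such per-segment rewards might be stored and selected; the brand keywords, segments and rewards below are invented for illustration:

```python
from typing import Optional

# Hypothetical reward table: (brand keyword, audience segment) -> reward.
REWARDS = {
    ("acme cola", "new_user"): {"virtual_object": "Acme badge", "coupon": "WELCOME10"},
    ("acme cola", "loyal"):    {"virtual_object": "Acme gold badge", "coupon": "LOYAL25"},
    ("acme cola", "*"):        {"virtual_object": "Acme sticker", "coupon": None},
}

def reward_for(brand_keyword: str, user_segment: str) -> Optional[dict]:
    """Pick the sponsor-defined reward for this user segment, falling back to
    the brand's default ('*') entry when no segment-specific reward exists."""
    key = brand_keyword.strip().lower()
    return REWARDS.get((key, user_segment)) or REWARDS.get((key, "*"))

print(reward_for("Acme Cola", "loyal"))
```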
  • Figure 14 (B) illustrates an example of a publication of a virtual object.
• the server module 153 of the virtual object application 136 detects that a digital spectacles device 1405 has conducted an augmented reality scanning 1402 or taken a photograph 1402, automatically or by pressing button 1451, at a particular location 338/336/310 at a particular date & time (discussed in detail in figure 7), which includes the recognized object 1470 or 1480 or 1490 that corresponds sufficiently to the specified object 346 or 348 or 349 and therefore satisfies the object criteria.
• the virtual object 1481 and associated virtual money may then be stored in the user's collection of virtual objects or added to the user's account (discussed in detail in figures 22-25).
• user or player 1492 needs to conduct one or more required activities or actions or call-to-actions or participations or transactions or play mini games to get, collect, acquire, store, win or select said displayed virtual object 1481 and associated virtual money.
• Figure 15 illustrates various types of examples of notifications a user or player may receive from server 110 administrators, or automatically from server module 155, authorized advertisers, sponsors, users of the network, or contacts of the user.
• the server administrator or server module 155 suggests and notifies the user or player about the nearest place(s) related to pre-defined real world object(s), or one or more pre-defined geofence boundaries related to pre-defined real world objects' associated location(s). For example, the server administrator suggests the nearest and most popular or most scanned "Central Park Zoo!!!" 1501 to the nearest user (e.g. Yogesh), so the user can visit the park, conduct augmented reality scanning of one or more animals and get the associated virtual objects (if available, depending upon a plurality of factors, such as the objects pre-defined by the user or server (e.g. an animal at Central Park Zoo), schedules, and availability for the user or all users of the network as per settings).
  • server administrator suggests new and contextual real world object or scene to contextual users of network.
• a server administrator or staff or editor picks or suggests and notifies the nearest and user-preference or logged-user-data specific contextual real world objects to the user or player 1599, for instance "Couple who are doing Cha Cha dance at Times Square" 1507, so the user or player 1599 can search for and find said scene in the real world (Times Square), conduct augmented reality scanning or take a picture of said scene, and get the associated virtual objects.
• server module 155 automatically suggests, to all or criteria-specific users of the network who are currently located in a particular location, said location-specific real world objects or scene. For instance, the server suggests and notifies to users located in Times Square the "New Movie Poster at Times Square" 1503.
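• For illustration, a minimal sketch (coordinates and radius are examples) of selecting which users' devices fall inside a point-of-interest area so that the location-specific suggestion can be pushed to them:

```python
import math

def within_radius_m(center: tuple[float, float],
                    point: tuple[float, float],
                    radius_m: float) -> bool:
    """Approximate containment test using an equirectangular projection."""
    lat0 = math.radians((center[0] + point[0]) / 2)
    dx = math.radians(point[1] - center[1]) * math.cos(lat0) * 6371000.0
    dy = math.radians(point[0] - center[0]) * 6371000.0
    return math.hypot(dx, dy) <= radius_m

def users_to_notify(user_locations: dict[str, tuple[float, float]],
                    poi_center: tuple[float, float],
                    radius_m: float = 300.0) -> list[str]:
    """Return the ids of users whose last tracked location lies inside the
    point-of-interest radius, e.g. Times Square for the movie-poster object."""
    return [uid for uid, loc in user_locations.items()
            if within_radius_m(poi_center, loc, radius_m)]

times_square = (40.7580, -73.9855)
print(users_to_notify({"u1": (40.7582, -73.9850), "u2": (40.7306, -73.9352)},
                      times_square))  # ['u1']
```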
• contacts of the user can define a new real world object or scene (based on permission, a payment model including free, paid or sponsored, validation, authorization, subscription, or allow-to-contacts-only settings) and can suggest said newly defined or user-defined real world object(s) or scene to the server or server administrator. The server or server administrator reviews, verifies, validates and makes said newly defined or user-defined or suggested real world object(s) or scene available for access or use by users of the network, selected contacts, or criteria-specific users of the network. For example, after defining said new real world object and submitting it to the server, or making a payment and submitting it to the server, for verification and to make it available for others as per the provided or associated settings, the server receives and verifies the submission; in the event it is marked as verified or made available for others as per the settings or as decided by the server administrator, in this example user [James] sends a message to user in contacts [Yogesh]: "Written
  • advertiser or sponsor can provide suggested message(s) to contextual users of network based on user’s profile, current location, current status (busy, playing game, online, available), preferences, past logged purchases, interacted or visited or scanned real world objects or types of real world objects, date & time, accompanied users or contacts, types of activity liked, conducted, interests, and scanned.
• a user can define and submit an object and associated virtual objects with the intention that, after validation, said defined object and associated virtual object(s) are made available for a particular event (e.g. birthday, anniversary, party, festival etc.), during the event date & time, and only for all or selected or invited contacts or attendees of the event (based on the defined geo-fence boundary).
• user [Amita] sends a message to user [Yogesh]: "My friends, AR Scan My Birthday Cake!!"
• a user can define and submit an object and associated virtual objects with the intention that, after validation, said defined object and associated virtual object(s) are made available to users at particular public place(s) or within a pre-defined geo-fence boundary, during a particular date & time, and only for all or selected or criteria-specific users (e.g. female users, age-range-specific users, invited contacts or attendees of an event (based on the defined geo-fence boundary)).
• a sponsor or advertiser sends a message to user [Yogesh]: "Woman who wear Fancy Hat at Times Square" 1512, so user [Yogesh] searches for a woman who wears a fancy hat at Times Square and, in the event of finding or guessing such a woman, can conduct augmented reality scanning of said woman's body (i.e. the pre-defined object); in the event of face and object recognition of said object by server module 153 based on the associated object criteria, the associated virtual object(s) are displayed to user [Yogesh].
  • server module 188 identifies user’s one or more types of activities in real world. In an embodiment server module 188 display associated or determined or contextual one or more types of virtual objects based on said identified one or more types of activities in real world.
• server module 188 monitors, tracks, identifies, determines, analyzes, processes, recognizes, logs and stores the user's one or more types of physical and digital activities, actions, call-to-actions, participated events, transactions, senses, behaviours, status, updates, communications, reactions, sharing, collaborations, current, related and visited locations and places, check-in places, interacted entities related to the user, and interacted, related, connected real world objects including products and services.
  • server module 188 identifying real world objects related to user based on monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or associated activities, senses and actions in real world including one or more types of performing of dance, doing of yoga and exercise like gym, composing music or playing music instruments like guitar, piano, drums and flute, providing performance, doing drama, acting, singing and painting or drawing, playing one or more types of sports based on provided and validated video, photo, scanned real world’s one or more objects, part of real world or scene and associated data, one or more types of content, data and media and system data including identified valid current date and time of video based on matching Exchangeable image file format (EXIF) data of said video or photo with server date and time, identified location or place of recording of video based on monitored and tracked current location or place of user device and identify said identified location or place associated information including place name and details, recognized one or more types of activity or activities base on object recognition technologies, recognize face or body or one or more types of body parts of user depicted
  • server module 188 monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or collaborated or associated accompanied users or contacts and interacted entities in real world based on monitored and tracked current location or place of user device and connected one or more users’ devices, identify nearby user related or connected users or uses user phone's Global Positioning System (GPS) to find the people around user, user can see everybody user’s connected or related users or all users within a certain radius of user based on user device’s or phone's Global Positioning System (GPS) and enabling user to select one or more users who are not connected with user and who user feel related to user and logging and storing related information, structured data, metadata and system data.
  • server module 188 monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or associated participated events in real world based on monitored and tracked current location or place of user device and connected one or more users’ devices and associated event information, and calendar information and logging and storing related information, structured data, metadata and system data.
  • server module 188 monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or associated transactions in real world based on linking with seller’s system or accessing related sells data of seller’s database or receiving from the user, a scanned or photo of receipt of the purchase and based on receipt, validate actual purchase of one or more product or service by user from said business including unique business name, place or location, date & time of purchase, amount of purchase, quantity and names and details of one or more products or services, wherein identifying Exchangeable image file format (EXIF) data in scanned or photo of receipt including original date & time of scanned or captured photo of receipt video and match said extracted or identified original date & time with server date & time to validate or check or verify originality of captured photo or recorded video, identifying user device monitored or tracked location or place or enter and stay in geo-fence boundary at the time of sending of scanned or photo of receipt, identifying or recognizing unique business name, place or location, date & time of purchase, amount of purchase, quantity names
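• A hedged sketch of the receipt-based purchase validation outlined above, assuming the business name, purchase time and amount have already been extracted by an OCR step and the photo's EXIF time and device geofence status are known (all field names and tolerances are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ReceiptFields:          # fields assumed to come from an OCR step
    business_name: str
    purchase_time: datetime
    amount: float

def receipt_is_valid(fields: ReceiptFields,
                     expected_business: str,
                     photo_exif_time: datetime,
                     server_now: datetime,
                     device_inside_business_geofence: bool,
                     max_photo_age: timedelta = timedelta(minutes=15),
                     max_receipt_age: timedelta = timedelta(days=1)) -> bool:
    """Cross-check the OCR'd receipt against the tracked device and clocks: the
    photo must be fresh, sent from (or near) the business, name the expected
    business, and the purchase itself must be recent and non-zero."""
    return (fields.business_name.lower() == expected_business.lower()
            and abs(server_now - photo_exif_time) <= max_photo_age
            and device_inside_business_geofence
            and server_now - fields.purchase_time <= max_receipt_age
            and fields.amount > 0)

fields = ReceiptFields("Super Coffee", datetime(2018, 8, 9, 13, 50), 7.5)
print(receipt_is_valid(fields, "super coffee",
                       photo_exif_time=datetime(2018, 8, 9, 14, 0),
                       server_now=datetime(2018, 8, 9, 14, 5),
                       device_inside_business_geofence=True))  # True
```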
  • server module 188 monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or associated status in real world including busy, free, studying, playing, walking, talking, singing, viewing, reading, eating, listening based on monitored or tracked user device’s current location, place, sensor data including voice recognition, object or face or body parts recognition, date & time, duration, and any combination thereof and user selected or provided status and logging and storing related information, structured data, metadata and system data.
  • server module 188 monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or associated communications with one or more users or contacts in real world based on voice recognition technologies including identifying length or duration of talk, identify keywords based on voice to text converter technologies, identifying anonymous users or related users or connected users or contacts surround user who are talking with user and logging and storing related information, structured data, metadata and system data.
  • voice recognition technologies including identifying length or duration of talk, identify keywords based on voice to text converter technologies, identifying anonymous users or related users or connected users or contacts surround user who are talking with user and logging and storing related information, structured data, metadata and system data.
  • server module 188 monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or associated reactions, expressions, moods, styles, behaviours, emotions in real world based on recognizing one or more types of user reactions, expressions, moods, styles, behaviours, emotions based on provided or recorded photos or videos based on voice recognition, object or face or body parts recognition technologies and storing related information, structured data, metadata and system data.
  • server module 188 monitoring, tracking, analyzing, processing, determining user’s or user related or interacted or connected or associated or visited or past or current locations and places, check in places based on monitored or tracked user device’s current location or place and associated information and storing related information, structured data, metadata and system data.
  • server module 188 identifies user’s one or more types of activities in real world. In an embodiment server module 188 display associated or determined or contextual one or more types of said monitored activity equivalent virtual objects or one or more types of virtual objects, virtual elements, virtual money, or virtual power in virtual world based on said identified one or more types of activities in real world.
  • server module 188 identifies user’s one or more types of activities in real world and add or store said monitored type of activities specific identified or determined virtual objects or virtual elements or objects in virtual world to user’s portfolio of virtual objects or virtual elements or objects in virtual world
  • Figure 16 illustrates user interface for enabling user to upload or submit one or more types of media including one or more photos and videos related to user’s one or more types of current activities including traveling (foreign, national, local point of interests, tourist places, activities), visiting point of interests or places like restaurants, gardens, museums, art gallery, boating, walking, running, flying, beach, lake, temple, riding on elephant or horse or camel, tracking and mountaineering, itinerary with contacts (friends and family, relatives, class mates or others), eating food at particular restaurants, conducting transactions, shopping, playing sports, doing yoga or exercise, reading book, listing music, visiting salon or beauty parlor, one or more types of fashion, beauty and lifestyle including hair, face, expression, cloths and accessories, one or more types of qualification user possess, one or more types of arts or skills user possess including music or playing particular type of instrument, acting, singing, comedy, painting, dancing, participating with one or more types of events including birthday party, work or marriage anniversary party, gathering, friend meeting, attending any type of party at particular location, viewing movie, viewing television serial, drama
  • server module 188 receives uploaded or submitted or shared or provided one or more types of media including one or more photos, videos, receives monitored or tracked user device’s current location or place and associated one or more types of information, sensor data from user device’s one or more types of sensors, current date & time and associated information, scanned or photo of receipt of purchase, transaction data from linked database of sellers, one or more 3 rd parties or external sources data related to user, access one or more types of user or connected users of user’s related or associated data including user profile, connections or contacts, checked in places, updates, status and like.
• after receiving or accessing said one or more types of data, contents, media and metadata, server module 188 processes said data by employing one or more types of technologies including object recognition, voice recognition, face and body parts recognition, and Optical Character Recognition (OCR), and recognizes, processes, measures, analyzes and calculates sensor data.
• Figure 16 (A) illustrates an example wherein the user captures a photo 1611 or records a video.
• Server module 188 receives said submitted media or photo or video 1625 and the user-provided or user-selected type or name or details.
• Server module 188 recognizes the date & time and recognizes or detects the face based on face or body part recognition techniques.
• Server module 188 displays or stores associated or contextual or relevant one or more types of virtual objects 1621 and associated points or virtual money 1620, or instructs the user to conduct one or more types of activities, actions, participations, transactions, follow rules or play a displayed mini game to get, win, acquire, catch, store, or add to the collection of the user's portfolio of virtual objects said displayed one or more types of virtual objects 1621 / 1620.
• Server module 188 receives said submitted media or photo or video 1630 and the user-provided or user-selected type or name or details 1664 of the activity or action or task or transaction or status or participating event, recognizes the recorded and uploaded video's associated location or place based on the monitored or tracked or logged user device's location at the time of recording or uploading of the video, recognizes the date & time of the recorded or uploaded video based on the monitored or tracked or logged user device's date & time and matches said date & time with the server's date & time, recognizes or detects the face of the user based on face or body part recognition techniques and matches the user profile photo (wherein said profile photo or live image was received by the server at the time of registration via the verified mobile device's associated camera) with said recognized face to identify that both are sufficiently similar, and recognizes or detects objects in the received photo or images of the video based on
• server module 188 displays or stores associated or contextual or relevant one or more types of virtual objects and associated points or virtual money 1650, or instructs the user to conduct one or more types of activities, actions, participations, transactions, follow rules or play a displayed mini game to get, win, acquire, catch, store, or add to the collection of the user's portfolio of virtual objects said displayed one or more types of virtual objects 1650.
• by linking with the dance class database information, server module 188 identifies the membership of the user who uploaded the photo or video, identifies the location of the dance class, and matches it with the location or place of recording or uploading of the one or more photos or videos.
• server module 188 verifies the membership with the class via connected or related or verified other users of the network (e.g. the dance instructor, other students, viewers, references).
  • server module 188 identifies and verifies other one or more types of activities conducted by or related to user including playing of one or more types of music by using one or more types identified or recognized instruments, painting of art or design, singing of one or more types song recognizes based on matching voice of user with voice detect in uploaded music file based on voice detection technique, one or more types of acting, visiting of particular place or point of interests including garden, art gallery, museum, boating, beach based on location or recognized object in photo or video, doing of yoga or exercise at particular gym by recognizing place, face, and identify membership with gym based on linking with the gym, playing of particular type of sports like cricket, soccer, golf, badminton.
• the user needs to upload a video of a required minimum or maximum duration.
  • Server module 188 can verify said information with information published on professional network account of user, instruct to submit related documents, certificates, mark sheets and verifies based on employing one or more verification techniques.
  • User can submit photo or video demonstrating user’s health, beauty, fashion, lifestyle, style of hair, face, dress, expressions.
  • User can submit photo or video demonstrating user’s number of followers, comments, shares, likes on posts or publications shared by users in one or more social networks, web sites and applications.
  • Server module 188 can verify said information with information published on social network account of user.
• Server module 188 recognizes objects in the food based on object recognition, identifies the home place, the originality of the photo or video, and the actual eating of the food by the user based on recognizing the face for a particular duration of the video, and identifies health-related food.
  • server module 188 monitors, tracks and logs user’s daily physical activities including number of steps of walking by user throughout the day based on user device sensors.
  • User can submit photo or video of one or more types of health reports of user demonstrating user’s health, fitness.
  • User can submit information (photo, video of products from place of home or receipts of purchased products or subscribed services) about various types of brands used or using or liked by user, services used, using or subscribed or liked by user, membership (submit membership cards or information) of various classes, training centers, clubs, hotels, resorts, airlines, shops, names, contact information, identities and number of users or one or more types of entities connected, related, interacted with users (user as customer, viewer, member, guest, attendee, social worker, client, patient, tourist, commuter, member in group, member of applications, websites and services).
  • one or more types of entities of real world including product, service, brand, shop, company, school, college, class, professionals, organizations, place of business, service providers, sellers, online website or application, group, network, person, have representation (Like Facebook Page or Twitter Account or any social account) in virtual world and enable them to provide one or more types of information about products, services, brands, shops, company, organizations and one or more types of profiles.
• server module 188 monitors, tracks, identifies, determines, analyzes, processes, recognizes, logs and stores the user's one or more types of physical (in the real world) and digital activities, actions, call-to-actions, participated events, transactions, senses, behaviours, status, updates, communications, reactions, sharing, collaborations, current, related and visited locations and places, check-in places, interacted entities related to the user, and interacted, related, connected real world objects including products and services, and based on that automatically relates or connects the user or the virtual avatar or account or profile or virtual representation of the user with said interacted or connected or related one or more types of entities, or with the virtual representation or account or profile of said one or more types of entities, in the virtual world. For example, if the user is a customer of a particular shop in the real world then the user is also connected with said particular virtual shop in the virtual world.
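• A small illustrative sketch (not the actual data model) of mirroring a logged real world interaction as a relationship edge between the user and the entity's virtual representation:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualWorldGraph:
    """Tiny stand-in for the virtual world relationship store:
    each edge is (user id, relation, entity id)."""
    edges: set[tuple[str, str, str]] = field(default_factory=set)

    def connect_from_real_world_interaction(self, user_id: str,
                                            entity_id: str,
                                            relation: str = "customer") -> None:
        """Mirror a logged real world interaction (e.g. a purchase at a shop)
        as a connection to that entity's virtual representation."""
        self.edges.add((user_id, relation, entity_id))

graph = VirtualWorldGraph()
graph.connect_from_real_world_interaction("yogesh", "super-coffee-times-square")
print(graph.edges)
```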
  • server module 188 monitors, tracks, identifies, determines, analyze, processes, recognize, logs and stores user’s one or more types of physical and digital activities, actions, call-to-actions, participated events, transactions, senses, behaviours, status, updates, communications, reactions, sharing, collaborations, current, related and visited locations and places, check in places, interacted entities related to user and interacted, related, connected real world objects including products and services and based on that provides one or more types of virtual goods, virtual elements, virtual power, virtual money including displays or provides virtual objects based on identified products, services, brands, food items used by or using by or like by user in real world, virtual qualifications and virtual skills based on identified and verified related documents submitted by user and verified by server module 188, virtual fame based on identified physical and digital world activities including number of followers, fans, connections in social networks, number of websites or applications users, number of customers or clients or members, number of contacts and interactions with number and types of entities in physical or real world, virtual money based on money spend
  • Figure 16 (C) illustrates example, wherein user can provide or submit 1665 one or more types of media including photo or video or live stream 1675 via clicking or tapping on photo icon 1661 or video icon 1663 or live video streaming icon 1664 demonstrating user’s visiting place, identities of user who submits said one or more photos or videos, one or more accompanied persons including one or more friends, family members, contacts, wherein server module 188 receives said information and recognizes recorded and uploaded video associated location or place based on monitored or tracked or logged user device’s location at the time of recording or uploading of video, recognizes date & time of recorded or uploaded video based on monitored or tracked or logged user device’s date & time and matched said date & time with server’s date & time, recognizes or detects face of user and accompanied users (contacts, connections) based on face or body part recognition techniques and matching user profile photo (wherein said profile photo or live image received by server at the time of registration via verified mobile device associated camera) with said recognized face to identify that both are sufficiently similar
  • Figure 16 (D) illustrates example, wherein user can provide or submit 1695 one or more photo 1685 (1691) or videos 1693 or live video streaming 1694 demonstrating or proving user’s attending of particular named or type of event 1684 including user’s own or others’ birth day party, wherein server module 188 receives said information and recognizes recorded and uploaded video associated location or place (restaurant, hotel, banquet hall, party place, club, home of user or connected user) based on monitored or tracked or logged user device’s location at the time of recording or uploading of video, recognizes date & time of recorded or uploaded video based on monitored or tracked or logged user device’s date & time and matched said date & time with server’s date & time, recognizes or detects face of user and accompanied users or attendee of party (contacts, connections) based on face or body part recognition techniques and matching user profile photo (wherein said profile photo or live image received by server at the time of registration via verified mobile device associated camera) with said recognized face to identify that both are sufficiently similar In the
  • FIG. 17 illustrates graphical user interface (GUI) of exemplary virtual world 1700 which shows effect of real world activities, actions, events, participations, purchases, usage, status, behaviours, and real world user’s life related anything, digital activities including actions, call- to-actions, reactions, transactions, sharing, communications, collaborations in virtual world.
  • the server module 188 Based on monitoring, tracking, identifying, recognizing, detecting, analyzing, processing, logging and storing, by the server module 188, as the player conducting of or providing of information about or automatically logging of user’s or player’s one or more types of activities, actions, participations in particular events at particular place, providing of status, visiting or interact with one or more types of locations or places, interacted with one or more types of entities, contacts, conducting one or more types of transactions with one or more types of entities, conducting one or more types of digital activities, actions, senses, behaviours, interactions, status, reactions, call-to-actions, transactions, sharing, communications, collaborations, connections in the real world and/or digital world including websites, applications, the player or user can also interacted and connected, followed, related, mapped, associated with said entities in virtual world as a relative, friend, class mate, colleague, partner, employer, employee, neighbor, society member, citizens, native, visitor, attendee including attendee of particular event, show, exhibition, and program, client, customer, prospective
• user or player 1755 can also connect 1751 with the virtual world virtual representation 1710 of said real world interacted or connected or transacted or visited or associated entity and/or the associated one or more avatars (e.g. seller or staff) 1727 of said particular coffee shop 1710. If the virtual representation 1710 of said particular real world coffee shop and/or the associated avatar 1727 is not available, then the server module 188 generates, creates and adds said virtual representation 1710 and/or the associated avatar of the seller or staff 1727 of said particular real world coffee shop in the virtual world 1700.
• server module 188 identifies and retrieves mutual connections, following, followers, contacts, customers, purchase or transaction details, posts, shared contents and associated one or more types of reactions and displays them in the virtual world (for example, displaying "customer, following, liked Super Coffee" 1752 with connection line 1751 showing a visual connection link between user or player 1755 and the virtual representation (icon, photo, video, animation, 3D design or image or animation, virtual object, virtual character, virtual infrastructure equivalent to the real world) 1710 of the real world "Super Coffee" at a particular unique place in the virtual world).
  • user can follow in virtual world by clicking or tapping on follow button or send connection request by clicking or tapping on connect button and in the event of acceptance of connection request by invitee both are mutually connected and able to communicate, collaborate, provide reactions, transact and share one or more types of contents with each other, join group, visit shop, view products and make purchase one or more selected products in 2D or 3D or multi-dimensional virtual world.
  • user can search 1701 and view virtual representation of particular named entity.
  • user can search 1701 and add 1707 to user’ s virtual world or connect in virtual world with virtual representation of particular named entity.
  • user can filter 1703 displayed virtual representations of connected or related or visited or transacted one or more types of or plurality of entities of real world, wherein filter comprise one or more types or names of entities (shop, restaurant, tourist places, movie theater, beach, garden) and any combination thereof, type of relationships (customer, friend, family type of relationship, college or school friend, business connection, professional connection, class mate, college, partner, employer, employee, guest, viewer, member, attendee, visitor, speaker, subscriber, prospective customer, patient, client, student, particular type of position, following, follower, in contacts, in social contacts and like), date and time of adding or creating or relating or displaying, nearby, transacted within particular date and time range, conversed within particular date and time range, visited within particular date and time range, new posts or contents or news or details about new products and services,
• the user interface may comprise a 2D or 3D or multi-dimensional graphical user interface (GUI).
• a user can create a virtual representation 1755 of themselves, called an avatar 1755, and is able to interact with related, connected, associated, transacted and interacted places and objects of the real world and other avatars (users of the network can create virtual representations of themselves, called avatars, and are able to interact with other avatars of the virtual world) in the virtual world. They can explore the virtual world, meet other users or avatars of real world players or users, and can virtually visit places, shops, restaurants, home, office, hospital, online shop, participate in events, view one or more types of contents, information, posts, photos, videos, messages and multimedia, follow or unfollow them, connect or disconnect with them, communicate, collaborate and share with them, ask a query, view answers, visit a shop, view products, talk with staff or a representative (via voice or phone or video call, or message), purchase products, subscribe to services, use one or more types of call-to-actions to send and receive messages, fill a form, share or refer, make a call, view products or services, play a game, get, win, claim, purchase, acquire, or receive as a gift one or more virtual objects, virtual money and virtual rewards including coupons, vouchers, discounts and offers which may be redeemable or usable in the real world, add one or more virtual objects to collections of one or more types of virtual objects, socialize, participate in both individual and group activities, build, create,
  • user can view provided or displayed real world object 1735 and can play and win or get said displayed virtual object 1735, view new products information 1753 and custom offers trailered for user 1753, view type of relationship and status 1754 with said real world entity associated virtual representation 1711, conduct one or more types of actions, call-to- actions, transactions with said displayed real world entity associated virtual representation 1713, wherein actions, call-to-actions, transactions may comprises book particular movie ticket 1756, follow said movie theatre or brand by clicking or tapping on“Follow Us” button 1755, view and claim presented offer 1755, view and get appointment with doctor 1758, converse with doctor 1714 of real world entity (dispensary of doctor) 1715, view order history 1759, view various types of bouquet of flowers 1717, view avatars of other users of network including friends (e.g.
  • Figure 18 illustrates graphical user interface (GUI) of exemplary virtual world (2D or 3D or Multi-dimensional) having a virtual world geography 1850 that correspondences the real world geography and having a virtual world environment that correspondences the real world environment, as a result, based on monitoring, tracking, identifying, recognizing, detecting, analyzing, processing, logging and storing as the player or user 1855 conducting of or providing of information about or automatically logging of user’s or player’s one or more types of activities, actions, participations in events, providing of status, visiting or interact with one or more types of locations or places, interacted with one or more types of entities, contacts, conducting one or more types of transactions with one or more types of entities, conducting one or more types of digital activities, actions, senses, behaviours, interactions, status, reactions, call- to-actions, transactions, sharing, communications, collaborations, connections in the real world and/or digital world including websites, applications (discussed various related embodiments in details throughout the specification), the player 1855 or user 1855 can also interacted and connected,
• a virtual representation of an entity can play with the player in the virtual world, provide virtual objects in the virtual world that can be used in the virtual world, provide virtual rewards in the virtual world that can be redeemed in the real world, sell virtual goods in the virtual world, and sell, present, provide support for, market, and advertise real products and services in the virtual world.
• the virtual avatar of the player can directly or virtually reach any place related to one or more types of entities.
• user or player 1855 can also connect 1832 with the virtual world virtual representation 1833 of said real world interacted or connected or transacted or visited or associated entity and/or the associated one or more avatars (e.g. seller or staff) 1833 of said particular restaurant 1832. If the virtual representation 1833 of said particular real world restaurant and/or the associated avatar 1833 is not available, then the server module 188 generates, creates and adds said virtual representation 1832 and/or the associated avatar of the seller or staff 1833 of said particular real world restaurant in the virtual world 1850.
• server module 188 identifies and retrieves mutual connections, following, followers, contacts, customers, purchase or transaction details, posts, shared contents and associated one or more types of reactions and displays them in the virtual world (for example, displaying "customer, following, liked restaurant" 1832 with connection line 1860 showing a visual connection link between user or player 1855 and the virtual representation (icon, photo, video, animation, 3D design or image or animation, virtual object, virtual character, virtual infrastructure equivalent to the real world) 1832 of the real world restaurant at a particular unique place in the virtual world).
• the user can view one or more types of user action and call-to-action controls 1865 associated with the virtual representation (e.g. 1832) of a particular real world restaurant, wherein the one or more types of user action and call-to-action controls 1865 comprise: view new menu details, follow said virtual representation (e.g. 1832) and its associated real world entity (e.g. the restaurant), make an order, talk with a currently available representative, book a table, view details, and view posts including photos, videos, offers and messages.
• the user can select from the map 1850 and add 1821 a particular real world related entity 1822 to the virtual world map 1850 and connect with it to view associated virtual objects, play mini games or conduct required actions to select, get, collect, win and capture associated virtual objects, virtual money and virtual rewards, view products, view offers, view posts, view profile or business place details including opening and closing hours, reviews and ratings, and take one or more actions and call-to-actions including make a call, send and receive messages, and fill a form.
  • Server module 188 of server 110 monitors and tracks user’s one or more types of digital activities, actions, triggering of events, transactions, status, communications, sharing, collaborations, check in places, reactions, call-to-actions including interaction with one or more types of controls including action controls and reaction controls may comprise like button, comment button, share button, rating interface, follow button, buy button, order button, book button, access web address or Uniform Resource Locator (URL) or link, play button, search button, visit website, web page by entering web address from one or more 3 rd parties or external websites, webpages and applications, wherein server module 188 monitors and tracks associated triggering of events including click, tap, double click, double taps, sensestart, senseend, mouseover, mouseout, mousedown, mouseup, senseenter, senseleave, scroll, haptic contact engagement, persist and release, playing of video, downloading, uploading, click on link, viewing or taking of photo or video.
  • server module 188 of server 110 identifies, notifies, displays or stores to user’s account contextual or associated one or more types of virtual object(s) including virtual money.
• the user will need to play one or more mini games to select, get, win, add to the user's account, capture, acquire and collect said one or more types of virtual object(s) including virtual money.
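• For illustration only, a minimal sketch (control names and reward values are invented) of mapping a tracked UI event to the contextual virtual object and virtual money shown beside the control:

```python
from typing import Optional

# Hypothetical mapping from a monitored (control, trigger) pair to the
# contextual virtual object and virtual money displayed beside that control.
EVENT_REWARDS = {
    ("like", "tap"):  ("thumbs-up gem", 2),
    ("share", "tap"): ("megaphone token", 5),
    ("buy", "tap"):   ("shopping-bag token", 20),
    ("play", "tap"):  ("film-reel token", 1),
}

def reward_for_event(control: str, trigger: str) -> Optional[tuple[str, int]]:
    """Return (virtual_object, virtual_money) for a tracked event, or None
    when the event carries no reward."""
    return EVENT_REWARDS.get((control.lower(), trigger.lower()))

print(reward_for_event("Like", "tap"))          # ('thumbs-up gem', 2)
print(reward_for_event("Comment", "doubletap")) # None
```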
• Figure 19 (A) shows an example of clicking on a particular type of button, wherein in the event of triggering of a mousedown or touchstart or pre-defined voice command or haptic contact engagement and persist event on, or receiving from a touch controller a haptic contact signal indicative of a gesture applied on, the "Like" button or icon or link or control 1910, server module 188 of server 110 identifies, notifies, stores to the user's account or displays contextual or associated one or more types of virtual object(s) 1901 including virtual money 1902.
• Figure 19 (B) shows an example of clicking on a particular type of button, wherein in the event of triggering of a mousedown or touchstart or pre-defined voice command or haptic contact engagement and persist event on, or receiving from a touch controller a haptic contact signal indicative of a gesture applied on, the "Add To" button or icon or link or control 2518 with the intention to add a particular or selected video or item or product 1917, server module 188 of server 110 identifies, notifies, stores to the user's account or displays contextual or associated one or more types of virtual object(s) 1903 including virtual money 1904 beside/over/on/surround/at prominent place/overlay on said "Add To" button 1918.
  • Figure 19 (C) shows that in the event of triggering of mousedown or touchstart or pre-defmed voice command or haptic contact engagement and persist event on or receiving from a touch controller a haptic contact signal indicative of a gesture applied on“Follow” 1922 or“Connect”
  • server module 188 of server 110 identifies, notifies, stores to user’s account or displays contextual or associated one or more types of virtual object(s) 1905 including virtual money 1906 beside/over/on/surround/at prominent place/overlay on said
  • Figure 19 (D) shows that in the event of triggering of mousedown or touchstart or pre-defmed voice command or haptic contact engagement and persist event on or receiving from a touch controller a haptic contact signal indicative of a gesture applied on“Comment” button or icon or link or control 1930, server module 188 of server 110 identifies, notifies, stores to user’s account or displays contextual or associated one or more types of virtual object(s) 1907 including virtual money 1908 beside/over/on/surround/at prominent place/overlay on said“Comment” button 1930 or“Comment” box 1938 or content of comment 1931.
  • Figure 19 (E) shows that in the event of triggering of mousedown or touchstart or pre-defmed voice command or haptic contact engagement and persist event on or receiving from a touch controller a haptic contact signal indicative of a gesture applied on“Buy” button or icon or link or control 1932, server module 188 of server 110 identifies, notifies, stores to user’s account or displays contextual or associated one or more types of virtual object(s) 1915 including virtual money 1916 beside/over/on/surround/at prominent place/overlay on said“Buy” button 1932 or details of product 1933.
  • Figure 19 (F) shows that in the event of triggering of mousedown or touchstart or pre-defmed voice command or haptic contact engagement and persist event on or receiving from a touch controller a haptic contact signal indicative of a gesture applied on“Play” button or icon or link or control 1927 or in the event of monitoring of loading and displaying of one or more types of contents to user or searching and viewing of product(s) details, photo(s), and post(s), server module 188 of server 110 identifies, notifies, stores to user’s account or displays contextual or associated one or more types of virtual object(s) 1925 including virtual money 1926
  • Figure 19 (G) shows that in the event of triggering of mousedown or touchstart or pre-defmed voice command or haptic contact engagement and persist event on or receiving from a touch controller a haptic contact signal indicative of a gesture applied on“Share” button or icon or link or control 1942, server module 188 of server 110 identifies, notifies, stores to user’s account or displays contextual or associated one or more types of virtual object(s) including virtual money beside/over/on/surround/at prominent place/overlay on said“Share” button 1942.
  • Figure 19 (H) shows that in the event of starting of downloading or uploading or installing of application, server module 188 of server 110 identifies, notifies, stores to user’s account or displays contextual or associated one or more types of virtual object(s) 1945 including virtual money 1946 beside/over/on/surround/at prominent place/overlay on said viewing one or more types of contents 1949 9e.g. application details).
  • server module 188 of server 110 identifies, notifies, stores to user’s account or displays contextual or associated one or more types of virtual object(s) including virtual money at prominent place of user interface.
• user will need to play one or more mini games to select, get, win, add to the user's account, capture, acquire and collect said one or more types of virtual object(s) including virtual money.
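• A sketch of the mini-game gating described above, assuming (purely for illustration) that "completion" is a simple score threshold rather than any specific mini game; the reward stays pending until the game reports completion:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PendingReward:
    reward_id: str
    virtual_object: str
    virtual_money: int
    required_score: int          # assumed completion criterion for the mini game

@dataclass
class Account:
    collected: List[str] = field(default_factory=list)
    virtual_money: int = 0
    pending: Dict[str, PendingReward] = field(default_factory=dict)

def offer_reward(account: Account, reward: PendingReward) -> None:
    """Show the reward to the user but do not credit it yet."""
    account.pending[reward.reward_id] = reward

def complete_mini_game(account: Account, reward_id: str, score: int) -> bool:
    """Credit the pending reward only if the mini-game score meets the requirement."""
    reward = account.pending.get(reward_id)
    if reward is None or score < reward.required_score:
        return False
    account.collected.append(reward.virtual_object)
    account.virtual_money += reward.virtual_money
    del account.pending[reward_id]
    return True

acct = Account()
offer_reward(acct, PendingReward("r1", "golden_pizza", 10, required_score=50))
print(complete_mini_game(acct, "r1", score=72))   # -> True, reward credited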
• Figures 20-21 illustrate a user interface displaying descriptions of said monitored, tracked and logged various types of physical or real world or digital activities, wherein a description may comprise the type of activity or call-to-action 2006, an image of the type of activity 2005, the interacted type and named entity including name and image of a location, place, product, brand, service, person or object in the real world 2007 and application, website or one or more types of content in the digital world, the name of connected or accompanied person(s) or contact(s), a related link or web address or Uniform Resource Locator (URL) 2007, related or associated or determined structured data, metadata, statistics, date & time, source of activity, received one or more types of virtual objects 2012, virtual money 2013, virtual rewards including coupon, voucher, redeemable points 2010 / 2011.
  • User can filter or sort displayed information about real world and digital activities based on type of activity, brand name, entity name, product or service or website or application name, date and time, number of or amount of virtual money, type of virtual object.
• user can select and share one or more activity items from the displayed list of logged activity items with all 2191 or one or more selected contacts 2192, all users of network 2193, followers of user
  • set view duration 2196 to enable viewing user to view shared activity items for said pre-set duration only.
  • user can search 2003 information related to user’s logged activities.
• FIGS. 22-25 illustrate an example Graphical User Interface (GUI) 276 for enabling a user to view, manage and access various types of virtual objects 2250 / 2305 (in an embodiment virtual objects may have virtual value in terms of virtual money or virtual currency) which the user got, caught, gathered, won or captured from various places by conducting augmented reality scanning of various real world objects or from visiting pre-defined geofence boundary associated places or various other ways discussed in detail throughout the specification, wherein in an embodiment virtual objects 2250 / 2305 are equivalent or sufficiently identical to real world objects in the virtual world in terms of appearance or looks and feels, attributes and characteristics and have some additional virtual looks, attributes and characteristics which may be used differently in various types of games, including use for battle, use for trade, use as a vehicle to carry goods or transport, or use as a worker for doing one or more types of work.
  • certain types of virtual objects 2250 / 2305 including animals, trees, birds may grow and multiply but require one or more types of virtual resources including food, water, seeds, or eggs (which user may acquire via playing game or may purchase) as well as physical activities, actions, participations, transactions or spending of particular amount of money, or playing one or more types of mini game by real world player or user.
• in certain types of games there is a need for one or more types of said virtual objects including vegetables, fruits, dry fruits, animals, birds, trees, flowers, food items, accessories.
• user can also view, manage and access virtual power, virtual elements, virtual rewards 2480 including redeemable points, vouchers, coupons, offers (which are spendable, usable and redeemable in the real world) which the user got, caught, gathered, won or captured from various places by conducting augmented reality scanning of various real world objects or from visiting pre-defined geofence boundary associated places or various other ways discussed in detail throughout the specification.
  • user can use some types of the virtual objects as Geofilters 2495.
  • some types of the virtual objects used for describing activities of user including brands used, liked, scanned, and purchased 2450.
  • some types of the virtual objects 2301 describe the type of real world object scanned by user.
  • some types of the virtual objects 2250 may convert to or grow to or upgrade to different type of virtual characters based on change of level, playing of particular type of game or quest or mission in game.
  • user or player can purchase various types of virtual characters, virtual power, virtual arms, virtual vehicle, and virtual resources 2585 for different types of game by using or spending or in exchange of virtual money 2205 and real world money.
• in an embodiment user can use virtual objects 2350, equivalent to real world objects purchased, used or viewed, in various types of games.
  • as per change of level user may get different type of awards, certificates, medals, cups 2505.
• user can also view, manage and access virtual money or virtual currency 2205 (including virtual gold, virtual diamond, virtual treasure which have virtual value in terms of virtual money or virtual currency) which the user got, caught, gathered, won or captured from various places by conducting augmented reality scanning of various real world objects or from visiting pre-defined geofence boundary associated places or various other ways discussed in detail throughout the specification, wherein in an embodiment virtual money or virtual currency may have virtual value which the user can spend in the virtual world for various purposes including buying one or more virtual characters including soldiers, arms or weapons, vehicles, battle resources for a game, virtual resources including land, bricks, cements, materials, woods, glass and the like for
• the value of virtual money received is determined based on, or in an embodiment the user may get an additional amount of virtual money based on, the actual value of the scanned real world object, submitting a receipt of purchase of an actual product or service in the real world, the amount of purchase of an actual product or service in the real world, distance traveled by the player by walking, distance traveled by the player by vehicle, duration spent at a particular place, accompanying one or more contacts and associated rank, how fast the player reached a particular real world object compared to others, the level of the user, scanning by a particular number of users (group scan), date and time and associated information including details of the type of event, reaching a particular real world object by deciphering a message, the type of action or call-to-action conducted or type of mini game played, a real world object defined by a type of user including a user of the network or server administrator or sponsor, and any combination thereof.
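• One way to read the factors listed above is as additive or multiplicative bonuses applied to a base value. The sketch below is an illustrative assumption (the weights and function name are not from the disclosure):

# Hypothetical sketch of combining bonus factors into a virtual-money award.
def virtual_money_award(base_value: float,
                        receipt_amount_usd: float = 0.0,
                        km_walked: float = 0.0,
                        group_scan_users: int = 1,
                        player_level: int = 1) -> int:
    """Start from a base value and add illustrative bonuses for each factor."""
    award = base_value
    award += receipt_amount_usd / 10          # e.g. verified purchase receipt
    award += km_walked * 0.5                  # distance travelled on foot
    if group_scan_users > 1:                  # scanned together with other users
        award *= 1.2
    award += player_level                     # higher level, small flat bonus
    return int(round(award))

print(virtual_money_award(base_value=5, receipt_amount_usd=100, km_walked=2,
                          group_scan_users=3, player_level=4))   # -> 23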
• Games may comprise action games including battle, shooting, sports games, story games, artistic/creative, exploration, virtual world geography and activities corresponding to real world geography and activities including real world competition effects on the virtual world (e.g. a user may win or lose virtual objects based on a real world sport's outcome, wherein sports may comprise soccer/football, badminton, field hockey, volleyball, basketball, tennis, cricket, table tennis), casino in the real world (a user may win or lose virtual objects based on real world casino game outcomes, wherein casino games may comprise table games).
• in augmented reality based games a user may win or lose, or the server adds to or deducts from the user's account, a particular amount or number of virtual objects based on various types of real world activities including promotion in positions, qualifying college degrees, getting the highest score in a particular exam (locally, all over a school or college, a pre-defined real world area, interstate or nationwide, international), spending of real money in the real world for particular types of real world activities, conducting of one or more types of real world activities including health related like yoga, gym, walking a particular number of steps within a particular duration, better health reports, education related including passing or getting a score in particular types of exams and associated awards, medals, certificates, work related including a particular type of promotion, particular type of skills, particular type of or number of years of experience, real world salary range, granted patents in the real world, numbers of followers, contacts, connections, interacted entities in the real world, numbers of sharings, publishings, posts and associated one or more types of and number of reactions including number of
  • user needs particular amount of total value of virtual objects and virtual money to reach at particular level.
  • a character progression system in which players earn points or amount of virtual money for their actions and use those points to reach character "levels", which makes them better at whatever they do.
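• A minimal sketch of such a point-based level progression; the thresholds, growth factor and level cap below are illustrative assumptions, not values from the disclosure:

# Hypothetical level curve: each level requires progressively more points / virtual money.
def level_for_points(points: int, base: int = 100, growth: float = 1.5, level_cap: int = 50) -> int:
    """Return the character level reached for a given accumulated point total."""
    level, threshold = 1, base
    while points >= threshold and level < level_cap:
        points -= threshold
        level += 1
        threshold = int(threshold * growth)    # next level costs 50% more
    return level

print(level_for_points(0))      # -> 1
print(level_for_points(100))    # -> 2
print(level_for_points(1000))   # -> 5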
• a game may comprise a persistent game environment, some form of level progression, social interaction within the game, in-game culture, system architecture, membership in a group, and character customization.
• a user or player can battle or combat monsters, and completing quests or missions for non-player characters, either alone or in groups, are the primary ways to earn points or virtual objects and virtual money.
• the accumulation of wealth is also a way to progress in a Massively multiplayer online role-playing game (MMORPG), and top ranked players are typically showcased by displaying their avatars on the game's website or posting their stats on a high score screen.
  • Another common practice is to enforce a maximum reachable level for all players, often referred to as a level cap. Once reached, the definition of a player’s progression changes.
• users or players can sell items to each other for in-game (virtual) currency, barter items of similar value between players, purchase in-game items for real-world currency, and exchange real-world currencies for virtual currencies; attaching real-world value to "virtual" items has had a profound effect on players.
  • user or player can get virtual money or equivalent value of virtual gold or virtual diamond or virtual power or valuable things. In an embodiment user or player can get virtual money or equivalent value of virtual gold or virtual diamond or virtual power or valuable things only when user made equivalent actual purchase of products or services in real world.
  • user or player can get virtual money as well as virtual objects to remember what user did (e.g. viewed movie then provide virtual object related to said movie poster).
  • user or player can get only virtual money in certain type of real world object scanning.
• user or player can get only virtual objects (equivalent to a particular amount of virtual money value) in certain types of real world object scanning.
  • user or player can play one to one, in team, with users of network, join with other groups and teams.
• when providing virtual objects, the value of the virtual objects in terms of virtual money value, and the particular amount of virtual money to provide, is determined by a server algorithm, wherein the server decides or uses a standardized method (e.g. a verified scanned receipt price converted to USD (e.g. 100 USD) divided by a particular number (e.g. 10) equals 10 virtual money).
• the server algorithm standardizes real world types, categories and names and provides an associated particular amount of virtual money value. For example, Type (Pizza) - Category (Food) - Name (Domino's Pizza) or (Super Pizza), then the standardized virtual money value is as per the pre-defined virtual money value (e.g. 15).
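• A sketch of that standardization step using the numeric examples above (the table contents, fallback rule and function name are otherwise assumptions):

from typing import Optional

# Hypothetical standardization table: (type, category) -> fixed virtual money value.
STANDARD_VALUES = {
    ("pizza", "food"): 15,       # e.g. Domino's Pizza or Super Pizza both map here
    ("coffee", "food"): 5,
    ("sneakers", "apparel"): 25,
}

def standardized_value(obj_type: str, category: str, receipt_usd: Optional[float] = None) -> int:
    """Prefer the pre-defined table value; otherwise fall back to receipt price / 10."""
    key = (obj_type.lower(), category.lower())
    if key in STANDARD_VALUES:
        return STANDARD_VALUES[key]
    if receipt_usd is not None:
        return int(receipt_usd // 10)       # e.g. 100 USD -> 10 virtual money
    return 1                                # minimal default award

print(standardized_value("Pizza", "Food"))                     # -> 15
print(standardized_value("Headphones", "Electronics", 100.0))  # -> 10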
• the server can access a places and associated information database (e.g. place associated information comprises photo, location information, name) of external providers. Based on said place related photos and associated information, the server identifies a real world photo, uses the identified real world photo as object criteria and converts said photo to virtual objects for providing to the user who conducted augmented reality scanning or took a photo of said real world object.
• Server module displays said photo of the real world object with the associated place on the real world map, so the user can view information, get directions and route information and reach said place, search and identify the object and can conduct augmented reality scanning or take a photo of said real world object, and in the event of identification of said real world object based on said object criteria, the server provides said virtual object to the user or player and provides the associated particular amount of virtual money.
• user may get particular types of medals which may add or increase an (N%) bonus in the virtual money amount value received for each or a particular type of Augmented Reality Scan.
• In an embodiment, the system generates or logs real world conducted activities, actions, participated events, transactions, status and associated photos, videos, notes or details or auto-generated details (location or place or check-in place name, accompanied contacts' names, number of contacts or profile photo and link, date & time, scanned product details (name, logo, price), scores, levels etc.) and enables the user to view, share or publish said details.
• rank the user within users of the network based on the highest total amount of virtual money value possessed by the player or user, and compare or provide rank within a building or society, address, area, pin code, road, east/west, suburb, city, state or country in the real world.
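• A sketch of ranking players by total virtual money within one chosen real-world scope (pin code, city, and so on); the player records and field names are fabricated for illustration:

from typing import Dict, List

players = [
    {"name": "A", "virtual_money": 340, "pin": "400001", "city": "Mumbai"},
    {"name": "B", "virtual_money": 520, "pin": "400001", "city": "Mumbai"},
    {"name": "C", "virtual_money": 120, "pin": "110001", "city": "Delhi"},
]

def rank_within(scope_field: str, scope_value: str, records: List[Dict]) -> List[Dict]:
    """Filter players to one scope (e.g. pin code or city) and sort by wealth, richest first."""
    in_scope = [p for p in records if p[scope_field] == scope_value]
    return sorted(in_scope, key=lambda p: p["virtual_money"], reverse=True)

for rank, p in enumerate(rank_within("pin", "400001", players), start=1):
    print(rank, p["name"], p["virtual_money"])    # 1 B 520 / 2 A 340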
  • User can view details, search, select, make payment, download, and install and configure one or more types of games uploaded by and provided by external developers and then user may provide game data with permission to said installed game, invite other users or contacts and start playing said game.
  • Figure 26 illustrates user interface showing exemplary various types of virtual attributes 2650 of virtual object or virtual character (e.g. Elephant) 2605 / 2651 including virtual money value
• Figure 27 illustrates an example Graphical User Interface (GUI) 276 for enabling a user to prepare and draft a message or request or suggestion or task definition 2709 for assigning a task or instructing a task to fulfil, or requesting or suggesting to do a particular task, wherein the task comprises asking or instructing or challenging to augmented reality scan or take a photo of a real world object or scene as per said message or request or suggestion or defined or described task and associated one or more rules and settings, including finishing said message or request or suggestion associated described task within a pre-set duration 2718.
  • message may in the form of text, location, web address or link, keywords, categories, hashtags, metadata, photo, video, structured data, and any combination thereof, wherein user can select one or more photos or videos voice files 2712, location 2711 or capture or record one or more photos 2713 or videos 2714, search, match, import, select from one or more types of list of messages or templates 2715 / 2730 including past sent list of messages, select message from locally saved or draft messages, select messages sent by contacts or contacts of contacts of user, search and select messages or templates of messages from directories and sent or prepared by other users of network or providers, select from bookmarked, advertised messages or templates of messages, select from displayed suggested list of messages or templates of messages, wherein suggested messages or templates of messages displayed based on users or selected contacts 2725 associated one or more types of user data including one or more types of user profile including job profile, business profile, professional profile, general profile, game profile, past or current locations, places, one or more types of logged activities, actions, events, transactions, status, behaviours,
• user can ask or hire a service provider to prepare a message or instruction or task as per the user's requirements, or enabling the user to purchase one or more hidden pre-defined real world objects from 3rd party providers.
  • preparing message or instruction or suggestion or task user can select one or more contacts, connections, followers, groups, networks, team from list of contacts, contacts of contacts, connections of one or more social networks, followers, networks, team as target recipients of said prepared message 2709 or make said message or instruction or defined task 2709 available or searchable for public or ask in public to finish or fulfill or do said message or instruction 2709 associated defined or described task 2709.
  • target recipients based on query, types, SQL (structured query language), selected one or more keywords and categories, locations, profile fields including age or age range, type of gender, education, skills, locations, income range, interest or hobby types, related type of or named entities including school, college, company, organization, club, division, class, and position, associated one or more types of one or more conditions, criteria, rules, filters, metadata and boolean operators and any combination thereof.
• a message may comprise a text, location information, a photo, a video, a voice, a link or web address, a keyword or hashtag and associated metadata including date & time of creation and sending, message sender photo, identity and link of profile, identities of one or more selected contacts as target recipients of the message, keywords, hashtags, tags, rules including a pre-set duration within which the message receiver has to finish the task defined in the message, preferences and settings, and any combination thereof.
  • server module 159 of server 110 to assign or instruct or suggest or request to fulfill or finish or complete or to-do said message or instruction or suggestion or request associated defined or described task including augmented reality scanning or take one or more photos or record one or more videos of real world object as per said message or instruction or suggestion or request associated defined or described task within said pre-set duration 2718.
  • structured fields may comprise one or more types or categories of real world object 2770 (user can search real world objects 2770 and selected real world object), current locations of each target recipients 2751, defined locations and places 2752, selected or set one or more types of locations or places 2752, defined radius or geofenced around particular or surround current location or place of each target recipients 2753, selected or searched or set location(s) or place(s) on map 2755, included and excluded or nearby 2762 one or more locations or places (location may comprise address, country, state, city, area, pin, suburb, point of interest, place name), type and named one or more contacts or groups 2775 or entities 2777 including school, college, restaurant, shop, mall, club, product, brand, and company name, one or more types or categories or sub- categories 2780/2781 or hashtags or keywords or tags 2782 of photo or video and any combination thereof. For example user selects object category
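• A sketch of how such an instruction/task message, its pre-set duration rule and its targeting fields might be represented; all field names here are assumptions, not the disclosed schema:

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class TargetCriteria:
    contacts: List[str] = field(default_factory=list)        # explicit recipients
    object_categories: List[str] = field(default_factory=list)
    radius_m: Optional[float] = None                          # geofence radius around each recipient
    keywords: List[str] = field(default_factory=list)

@dataclass
class TaskMessage:
    sender_id: str
    text: str
    media_urls: List[str] = field(default_factory=list)
    created_at: datetime = field(default_factory=datetime.utcnow)
    finish_within: timedelta = timedelta(hours=24)            # pre-set duration to finish the task
    targets: TargetCriteria = field(default_factory=TargetCriteria)

    def deadline(self) -> datetime:
        return self.created_at + self.finish_within

msg = TaskMessage(sender_id="sender-1", text="Scan the mural near the old clock tower",
                  targets=TargetCriteria(contacts=["recipient-1"], object_categories=["street art"]))
print(msg.deadline() > msg.created_at)   # -> True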
• user can save message 2709 locally or save it remotely 2760. In an embodiment user can share 2722 the message with one or more contacts, connections of one or more social networks, groups, networks or in public, or make it searchable or accessible for other users of the network. In an embodiment user can cancel
  • tips may include details of location or place name and address, one or more associated real world objects, related persons names, map, route & directions, costs or prices and like.
• server module 159 receives the message from the user and processes the message, wherein processing may include grammar verification, spell check, detecting the language as per defined rules and policies, duplicate content, junk characters, length of message, automatically recognizing and reporting spam, inappropriate and abusive messages, system and human mediated recognizing and reporting of spam, inappropriate and abusive messages as per rules, keywords and policies, verifying the image or images of a video to identify a spam or inappropriate message, and associating metadata and system data.
  • server module 159 may employ a moderation system to sort messages that are irrelevant, obscene, illegal, or insulting and may also employ content moderators to manually inspect or remove content flagged for hate speech or other objectionable content.
  • moderators are given special privileges to delete or edit others' messages and/or remove, block, suspend user account or warning user to ensure that the messages or shared content complies with legal and regulatory exigencies, site/community guidelines, and user agreements.
• spontaneous moderation may be employed, wherein users spontaneously moderate their peers through viewing, assessing and alerting the message sender.
  • Common privileges of moderators include deleting, merging, moving, and splitting of messages, locking, renaming, banning, suspending, unsuspending, unbanning, warning the members, or adding, editing, removing the messages.
• server module 159 may employ a wordfilter or content-control software or filter or censor system which contains a script that automatically scans users' messages and shared or sent or published contents as they are submitted and automatically changes or censors particular words or phrases.
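• A minimal sketch of such a wordfilter; the banned-word list and asterisk replacement policy are placeholders, not the actual filter used by server module 159:

import re

BANNED = ["badword", "another phrase"]          # placeholder terms only
_pattern = re.compile("|".join(re.escape(w) for w in BANNED), re.IGNORECASE)

def censor(message: str) -> str:
    """Replace each banned term with asterisks of the same length."""
    return _pattern.sub(lambda m: "*" * len(m.group(0)), message)

print(censor("this contains a BadWord in it"))   # -> "this contains a ******* in it"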
• server module 159 receives the message from the user and checks, verifies and validates the received message by employing message verification, reviewing and analyzing technologies and human mediated review, verification, checking and validation to identify whether the message is spam or inappropriate as per pre-defined rules and policies, and takes one or more actions and reactions on it, wherein reactions comprise marking as spam and inappropriate, and actions comprise instructing the system or server module 159 not to send said message to the message's associated target recipients defined and selected by the sender of said message, or removing or blocking or deleting or suspending the account of said message sender user.
• user can prepare message 2709 and first verify 2717 the message 2709 and then send 2720 message 2709 to one or more selected target recipients 2735, so in the event an unintentionally inappropriate message is found, the user is able to edit, verify again and then send the message to avoid it being marked as spam or inappropriate or abusive, to avoid blocking of sending of the message to target recipients, or to avoid removing or suspending or blocking of the user's account temporarily or permanently.
• After successfully processing and verifying the message, server module 159 prepares and generates a message notification for each target recipient of the message and sends the notification and message, associated metadata, rules and call-to-action or action controls to each target recipient via one or more types of notification systems and channels including push notification.
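• A sketch of preparing a per-recipient notification payload after the message passes verification; the field names and action list are illustrative, and the actual push channel is abstracted away:

from datetime import datetime, timedelta
from typing import Dict, List

def build_notifications(message: Dict, recipients: List[str]) -> List[Dict]:
    """Create one push-notification payload per recipient with metadata and call-to-action controls."""
    deadline = datetime.utcnow() + timedelta(seconds=message["finish_within_s"])
    return [{
        "to": recipient,
        "title": f'New task from {message["sender"]}',
        "body": message["text"],
        "deadline": deadline.isoformat(),
        "actions": ["accept", "reject", "schedule", "request_change"],  # call-to-action controls
    } for recipient in recipients]

notes = build_notifications(
    {"sender": "sender-1", "text": "Scan the cafe signboard", "finish_within_s": 3600},
    ["recipient-1", "recipient-2"])
print(len(notes), notes[0]["actions"][0])   # -> 2 accept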
  • Figure 28 (A) illustrates example Graphical User Interface (GUI) 277 for enabling each target recipient to view said received notification including message 2802 and associated metadata including sender photo, name and profile link 2801 and rules including purchase particular product, take one or more types of actions, play one or more type of mini game, duration 2807 within which user need to finish message associated task to get points and take associated one or more user actions or call-to-actions including accept 2811 message 2802 or message associated task or instruction or suggestion or request 2802 including as per message augmented reality scan or capture instructed or suggested or challenged or requested particular type of task specific photo or record video 2802 of real world object or scene, reject message associated task 2810 or reject message associated task with selected or provided one or more reasons 2810, accept message associated task and provide start and end schedule 2812 of task as per pre-set duration to finish task associated with message, request sender to change or update message 2813 or chat with message sender to converse regarding change or update of message, instead of accept message, ask or instruct message sender to do message associated task 2814, in the event of message
  • Figure 28 (B) illustrates example Graphical User Interface (GUI) 277 shows received 2822 and send 2823 lists of messages or instructions or suggestions or requests to do particular or defined or described one or more tasks including augmented reality scan or capture one or more photos or record videos of real world object(s) or scene based on received message or instruction or suggestion or request.
  • user can select particular received messages or instructions or suggestions or requests 2835 from list of messages or instructions or suggestions or requests 2844 and can accept 2852 message 2835 or message associated task or instruction or suggestion or request 2835 including as per message augmented reality scanning or capture instructed or suggested or challenged or requested particular type of photo or record video 2835, reject message associated task 2851 or reject message associated task with selected or provided one or more reasons 2851, accept message associated task and provide start and end schedule 2853 of task as per pre-set duration to finish task associated with message, request sender to change or update message 2854 or chat with message sender to converse regarding change or update of message, instead of accept message, ask or instruct message sender to do message associated task 2855, in the event of message sender is anonymous and receiving user found message as spam or inappropriate or abusive then receiving user can report as spam or inappropriate or abusive 2857 to server module 159 and block or remove user 2857.
• user can provide the user's status including busy, not available for particular days, with one or more types of selected or provided reasons 2862.
• the server module 153 of the virtual object application 136 detects that a mobile device 200 has conducted an augmented reality scan 2872 or recorded a video or taken a photograph 1872. In the event of augmented reality scanning 2872 or scanning 2890 or taking a photo 2895 via camera application 2895 of user device 200 of, for example, said movie 2372, it receives the photo or image or video or scanned data 2872 from the user device 200.
  • the server module 153 of the virtual object application 136 validates actual or original date and time of received scanned data or captured photograph or image 2872 based on sufficiently matching received scanned data or a photograph or image 2872 associated Exchangeable image file format (EXIF) data including date and time of capturing photo or scanning 2872 with current date and time of server.
  • the server module 153 of the virtual object application 136 identifies or recognizes an object 2880 in the photograph or scanned data 2872 that corresponds sufficiently to specified object 2881 and therefore satisfies the object criteria 2881.
• Based on the identified object 2880 satisfying the object criteria 2881, including an object model or image or object keywords associated with the virtual object in the stored data, display or provide the virtual object 2872 and associated virtual money 2871 to the client device 200, and store the virtual objects 2872 and associated virtual money 2871 provided to the client device 200 in a virtual object collection associated with the user account.
• server module 153 receives from the user an augmented reality scan or a photograph or scanned data 2872 of a movie or show or play or event and validates the actual or original date and time of the received scanned data or captured photograph or image 2872 based on sufficiently matching the received scanned data's or photograph's or image's 2872 associated Exchangeable image file format (EXIF) data, including the date and time of capturing the photo or scanning 2872, with the current date and time of the server, and validates the location of said scanned object or scene of the real world 2872 based on sufficiently matching the pre-defined location of said scanned object or scene of the real world with the monitored and tracked current location or place of the user device 200 which scanned or took a photograph 2872 of said object or scene (e.g. a movie scene image), and based on said date & time and location or place information, identifies movie or show or play or event details including name, show time, theater name.
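• A sketch of the freshness and location checks described above, assuming the EXIF timestamp and GPS coordinates have already been parsed out of the upload; the tolerance values and function names are assumptions, and the EXIF parsing step itself is omitted:

from datetime import datetime, timedelta
import math

def _haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def validate_scan(exif_time: datetime, exif_lat: float, exif_lon: float,
                  expected_lat: float, expected_lon: float,
                  max_age: timedelta = timedelta(minutes=10),
                  max_distance_m: float = 200.0) -> bool:
    """Accept the scan only if it is recent and was taken near the pre-defined location."""
    fresh = abs(datetime.utcnow() - exif_time) <= max_age
    close = _haversine_m(exif_lat, exif_lon, expected_lat, expected_lon) <= max_distance_m
    return fresh and close

# Example: a photo taken just now, at the expected theatre entrance (coordinates made up).
print(validate_scan(datetime.utcnow(), 19.0760, 72.8777, 19.0760, 72.8777))   # -> True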
  • Server module 153 identifies or recognizes an object 2880 in the photograph or scanned data 2872 and based on the identified object 2880 satisfying the object criteria 2881 including object model or image or object keywords associated with the virtual object 2872 in the stored data, display or provide the virtual object 2872 / 2871 to the client device.
  • Server module 153 stores said virtual objects 2872 and associated virtual money 2871 provided to the client device in a virtual object collection associated with the user account or client device.
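• The object-criteria check above can be sketched as matching recognizer output against the stored criteria keywords; here a plain list of labels stands in for a real image-recognition step, and the rule table is a placeholder:

from typing import Dict, List, Optional

VIRTUAL_OBJECT_RULES = [
    {"criteria_keywords": {"pizza", "pizza box"}, "virtual_object": "golden_pizza", "virtual_money": 10},
    {"criteria_keywords": {"movie poster"},       "virtual_object": "film_ticket",  "virtual_money": 5},
]

def match_object(recognized_labels: List[str]) -> Optional[Dict]:
    """Return the first virtual-object rule whose keywords intersect the recognized labels."""
    labels = {label.lower() for label in recognized_labels}
    for rule in VIRTUAL_OBJECT_RULES:
        if rule["criteria_keywords"] & labels:
            return rule
    return None

print(match_object(["Pizza", "table", "person"]))   # -> the golden_pizza rule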
• Figure 28 (C) illustrates an example Graphical User Interface (GUI) 271 showing automatically displaying the camera application 2875 to enable the receiving user's (e.g. "Lily") computing device 200 to augmented reality scan or scan by clicking or tapping on a button or icon or control 2890, or capture a photo by clicking or tapping on the "Photo" button or icon or control 2895, or record a video by clicking or tapping on the "Video" button or icon or control.
• a message for sending to the message sender (e.g. "Yogesh Rathod"), wherein the message comprises said information about virtual objects 2872 and associated metadata including message identity, instruction or request or suggestion or task message sender identity (e.g. "Yogesh Rathod"), exchangeable image file format (EXIF) data, location information, date & time of the virtual object, associated captured photo or recorded video, message sender identity (e.g. "Lily") and receiver identity (e.g. "Yogesh Rathod") and system data. In an embodiment, based on said message associated metadata, the server module 159 identifies the sender of the message as the recipient of the message.
  • user can filter 2825 or sort 2826 received messages based on user name or identities, status including received, accepted, not accepted, scheduled, instructed to sender of message to do message associated task, task pending, task done (scanned task specific message and received associated virtual objects) and accepted by task provider or instructor, task not done, duration left to finish task, date & time of receiving of messages, highest reactions received, new reactions received, ranks, contacts type, contact rank, online or offline status, one or more members of particular named team or team name, ascending or descending order and any combination thereof.
• enabling a user to assign a received message associated instruction or task 2863 to one or more contacts or other users of the network or paid service providers. In an embodiment, enabling the user to assign the received message associated instruction or task 2863 to one or more contacts or other users of the network or paid service providers with permission or authorization from the instruction or task message sender or preset settings of the instruction or task message sender and/or the server 110 administrator, or based on default or user specific settings of server 110.
  • FIG. 29 illustrates example Graphical User Interface (GUI) showing status 2940, reactions controls 2946 and reactions and metadata to the instruction or task or suggestion or request message sender (e.g.“Yogesh Rathod”).
  • message 2922 from list of received message 2970 by receiving or viewing user (e.g.“Yogesh Rathod”), displaying message associated one or more photos or videos or media or any combination thereof, wherein message may comprises message sender photo 2933, name and profile link 2934, status and type of message send 2937, instruction or task or request or suggestion message sender name and associated instruction or task or request or suggestion message 2935, date & time 2930, status 2940 associated with instruction or task or request or suggestion message sent by user (e.g.“Yogesh Rathod”), thumbnail of received photo or video or media 2936 send by instruction or task doer user (e.g.“Lily”), one or more type of reaction controls 2946 for enabling receiving or viewing user to take one or more actions and reactions including like, dislike, comment, share, report.
  • Figure 30 illustrates example Graphical User Interface (GUI) 279 wherein user can select“Free Form” option 3005 to get instruction or task message from one or more contacts, contacts of contacts, advertisers, users of network based on preferences, rules and settings 3003, wherein preferences, rules and settings comprises selected one or more contacts, connections of one or more social networks and applications, groups, networks, any users of network or public, keywords and categories specific users including users from particular location(s), named school or college or type or named interacted or related one or more entities, all or preferences specific verified users, all or preferences specific advertisers including advertisers who provides offers including cash, gifts, redeemable points, vouchers, cashbacks, discount, samples, digital goods, virtual objects, virtual money or points, pre-set scheduled date and time or pre-set duration for receiving instruction or task message, receiving instruction or task message from mutually connected users or invitation accepted users only.
  • User can view score 3001 and share score 3002. User can accumulate or collect or aggregate score as much as possible for ranking, showcase in ladder.
  • instruction or task messages e.g.1501, 1504, 1505, 1507, 1509
  • server based on preferences including interests types, categories, keywords, hashtags, named or type of or pre-set locations and places, defined geofenced boundaries, defined radius surround user’s current location or particular locations or places, date & times and one or more types of filters, conditions, SQL (Structured Query Language) and criteria, rules including receiving instruction or task message as per pre-set scheduled date and time or pre-set duration and settings 3004 of user and/or advertisements or suggested by server or picked by server administrators and/or one or more types of user data, wherein user data comprises current or past locations, checked-in places, current date and time and associated events, schedules, place associated information, date associated events or festivals, accompanied contacts and associated user data, logged various types of information about current and past activities, actions, events, transactions, interacted entities, calendar entries, user contacts and connection and associated user data, one or more types of profile including general profile comprises age, gender, income range, interests or hobbies, current and past work
  • enabling user to select“Singles” option 3007 wherein user can accept invitation 3018 of contacts or other users of network and send invitation 3020 to contacts or other users of network and in the event of acceptance of invitation enable both mutually connected users to send each other instruction or task messages based on one or more types of mutually decided rules and settings and receive associated or related status notification regarding searching, founding, identification of instruction specific real world objects, scanning of said identified real world object(s) and receiving of associated virtual objects / virtual money, wherein rules and settings 3015 comprises finish game within pre-set duration 3011, daily allowed pre-set number of instruction or task messages sending 3012 and get message associated task done within pre-set duration 3013 and then user can save and execute or apply rules and settings 3014. So user can play with each mutually connected user with mutually decided rules and settings with each mutually connected user. User can view score 3008 and share score 3009.
  • user can select Team option 3025 and create and manage one or more teams.
  • user can tap or click on“create team” button 3027 and provide team name e.g.“Super NYC“ 3028 and invite other team administrator 3030 (e.g.“Team“Photo King” (Admin: Adriana) Accepted invitation”) or accept invitation of other team administrator 3032 with the intention to challenge said team.
  • team or each participated team can invite team members by selecting one or more contacts, groups or other searched or matched users of network (e.g. 3040, 3042, 3044 and 3045) from list of contacts, groups and other users of network 3046 and sending invitation to each team members 3050.
  • Team administrators can mutually decide rules and settings 3052 to be applied to both teams, wherein rules and settings comprises finish game within pre-set duration 3070, daily allowed pre-set number of instruction or task messages sending by each team to other team 3071 and get message associated task done within pre-set duration 3072, number of players in each team 3073, number of matches 3075, allowed number of times request or instruction or task message changes 3077, allowed number of times retry 3078.
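• A compact sketch of the mutually agreed team rules above as a configuration object; the default values and field names are illustrative only:

from dataclasses import dataclass

@dataclass
class TeamRules:
    finish_within_days: int = 7          # finish game within pre-set duration
    daily_task_messages: int = 5         # task messages each team may send per day
    task_deadline_hours: int = 24        # time allowed to complete each task
    players_per_team: int = 4
    matches: int = 3
    allowed_task_changes: int = 1
    allowed_retries: int = 2

rules = TeamRules(players_per_team=6, matches=5)
print(rules.daily_task_messages, rules.players_per_team)   # -> 5 6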
  • administrator of team can remove one or more team members from team 3082.
  • user can view and accept invitation of particular team administrator 3081 with the intention to participate in said team.
  • any team member can remove themselves from particular team 3083.
  • team members and based on mutual permission, contacts or other users of network can view scores 3053.
• team administrator or, as per mutual decision, team administrators can view or publish or share team information, associated updated scores, status, statistics, analytics, logged activities 3060 to feeds, timelines, and one or more contacts, groups, networks, external social networks, websites, applications, web services, and one or more types of digital destinations. In an embodiment, if there is a tie between two teams, then a pre-set number of (e.g. 5) more attempts is given to each team.
  • pre-set particular number of points or virtual money for acceptance of instruction.
• pre-set particular number of points or virtual money for finishing a particular number of the assigned tasks.
  • assign more than one task by sending a message 2709.
  • user can view detail profile of contact user e.g.“Lily”2725 before sending message 2709 to said contact, so user can properly customize or prepare instruction or task 2709.
• Server module 160 receives and stores invitations, accepted or rejected invitations, information about mutually connected players, information about created teams and associated participating members, scores, rules, preferences, settings, updates, logged activities, actions, events, transactions, status, statistics, and updated scores.
  • Figure 31 illustrates example Graphical User Interface (GUI) 280 for enabling user to pre-set allowing to automatically publish or share or send or update scores 805 and automatically publish or share or send or update each or selected instruction or task message(s) and instruction or task message associated one or more types of content or media including one or more photos or videos, associated updated scores, number and types of views, number and types of reactions, logged one or more types of activities, actions, reactions including number of views, likes, types of emoticons, comments, shares 3107 to all 3111 or selected one or more contacts 3112 or public 3113 or None or do not publish or send to anybody or send only to message sender 3114, followers 3115, enabling to publish to said content to one or more social networks, search engines, websites, webpages, applications, web services, one or more selected users on external website’s timeline or feed or profile, send via one or more communication channels, mediums and modes including email, phone, VOIP, SMS, Instant messenger, tweets and posts 3118, save locally or at particular storage 3116, and pre-set view
  • enabling user to set status including online, offline, busy, not available and provide availability or non- availability one or more schedules and apply“Do Not Disturb” settings, rules and policies including allow to send instruction or task message only to selected contacts or users, availability or non- availability one or more schedules, ON or OFF“Do Not Disturb”
  • notification settings may include play selected named and type of notification tones and vibration type while receiving of instruction or task message, while receiving of instruction or task message associated status, while receiving instruction specific virtual objects by identifying and scanning real world object by instruction receiver, while sending of instruction or task message specific or related status, type of reactions 3134.
• Server module 160 receives, stores and applies said one or more types of privacy settings related to each user of the network.
• enabling the user to search and match one or more contacts or users of the network, view profiles of searched or matched or suggested users of the network, and send invitations to or accept invitations of other contacts or other users of the network 3136.
  • user is configured to provide one or more types of payment information 3142.
  • team members and administrator can collaboratively 330 (“Collaborative / Team”) prepare instruction or task message 309 and can send to one or more target recipients or teams or selected member of selected team.
• administrator of a team can assign the received message associated task including search, locate, identify and scan or take a photo of the real world object.
  • enabling administrator of team or member of team to accept or reject message associated instructed task.
• score may be calculated based on how fast the user searches, identifies and scans the instruction or task specific real world object(s).
• when the instructed task is finished, enabling asking another task (the task assigner user must provide the task within a particular duration, e.g. within 10 minutes) and determining the winner user or team based on who finishes more tasks within a particular duration (e.g. 1 week).
  • enabling user to search one or more team based on name, rank, score range, location, category, profile field and invite them and challenge them.
  • Figure 32 illustrates example Graphical User Interface (GUI) 290 for enabling user 305 or server administrator 303 to search particular location or place on map based on search query or one or more keywords 3211 or select particular place or location 3213 on map 3212 or visit place 3213 and get or identify Global Positioning System (GPS) coordinates, longitude, latitude, altitude one or more location points (e.g.
• a geolocation sensor of the mobile device 200 may be used to detect a geolocation of the mobile device 200.
  • a WiFi sensor or Bluetooth sensor or Beacons including iBeacons or other accurate indoor or outdoor location determination and identification technologies can be used to determine the geolocation of the mobile device 200.
• user 3205 or server 3203 administrator can capture or record or select and provide one or more photos or videos related to said pre-defined real world geofence, provide or select and associate one or more types or categories or sub-categories or taxonomy, provide a physical address related to said pre-defined real world geofence so users or players of the network can physically reach or visit said pre-defined geofence place or location or use map directions & route and step by step or guided directions to physically reach said pre-defined real world geofence, provide details or description or structured details of said pre-defined real world geofence, select or add one or more new fields and provide each added field's specific one or more types of values or data or one or more types of contents or media.
  • exemplary user of network 3205 limits users of network or exemplary user 3205 to provide one or more types of information.
• exemplary user of network 3205 is enabled to save said information as a draft for later editing or submission 3288, or edit already drafted information, or cancel or discard or remove said provided or drafted information, or submit it to server module 183 of server 110 for verification, validation, moderation, editing, providing or applying other settings and making it available for all or authorized users of the network.
• define geo-fence boundary 3215 or draw on map (e.g. 3212) a geo-fence boundary (e.g. 3215) surrounding a particular real world place (e.g. garden, zoo, park, museum, public place, place of interest, tourist place, forest, mountain, pond, river, sea, island, land, mall, sky, art gallery, home, building and the like).
• enabling user of network 3205 to define or set whether virtual object(s) and associated virtual money are displayed or shown or presented anywhere within the pre-defined geo-fence boundary 3215 of the real world, so when a user or player enters said defined geo-fence boundary 3215, then without scanning any real world object within said geofence boundary, displaying or presenting virtual objects 3266 and/or associated virtual money 3264 to said user or player who entered or dwells in said pre-defined geo-fence of the real world and enabling said user or player to get said displayed one or more virtual objects 3266 and/or associated virtual money 3264, or to get said virtual objects 3266 and/or associated virtual money 3264 by conducting one or more pre-set activities, actions or call-to-actions 3270 or based on deciphering a pre-defined clue 3270.
• events, products, exhibitions, and shows: install application 395 of said pre-defined geo-fence 3215 of the real world and associated one or more places, events, products; need to take one or more types of reactions including like, dislike, provide one or more types of emoticons 3297 on said pre-defined geo-fence 3215 of the real world and associated one or more places, arts, events, products, exhibitions, types of tickets and shows; need to view a particular duration of presentation of said pre-defined geo-fence 3215 of the real world and associated one or more places, arts, events, products, exhibitions, types of tickets and shows 3929; follow 3221 said pre-defined geo-fence 3215 of the real world and associated one or more places, related personas, arts, events, products, exhibitions, and shows; need to add to favorite or contact list 3222 said pre-defined geo-fence 3215 of the real world and associated one or more places, arts, events, products, exhibitions, types of tickets and shows; or conduct one or more types of actions as per defined rules 3276 or to
• user of the network who entered into or stayed or dwelled for a particular pre-set duration in said pre-defined geo-fence 3215 of the real world and associated one or more places, events, exhibitions, and shows will automatically get the associated virtual object 3266 and/or associated virtual money 3264.
• user of the network who entered into or stayed or dwelled for a particular pre-set duration in said pre-defined geo-fence 3215 of the real world and associated one or more places, events, exhibitions, and shows will need to play said set or selected mini game to get or collect or capture the associated virtual object 3266 and/or associated virtual money 3264.
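• For a circular geo-fence, entry detection reduces to a distance check against the boundary radius; the sketch below uses a haversine distance and made-up coordinates, and assumes a circular fence (a boundary drawn freely on the map would instead need a point-in-polygon test):

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user_lat, user_lon, fence_lat, fence_lon, radius_m):
    """True when the tracked device position falls within the pre-defined boundary."""
    return haversine_m(user_lat, user_lon, fence_lat, fence_lon) <= radius_m

# Example: a 150 m geofence around a museum entrance (coordinates are illustrative).
if inside_geofence(48.8606, 2.3376, 48.8606, 2.3380, radius_m=150):
    print("display virtual objects and associated virtual money")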
• user can suggest one or more geo-fence boundaries related to the real world and provide associated details, which server module 183 receives, verifies and makes available for other users of the network (discussed in detail in Figures 35-36).
• commercial geo-fence boundary (mall, shop, restaurant, hotel, theatre, commercial complex, offices, company, building, place)
  • advertiser or sponsor wants to publish virtual objects related to real world object including product, service, food item, place of business, board name, showcase display item or product, art, sculpture, design, person in-shop and logo or brand or name
• advertiser or sponsor can make payment 3289, and in an embodiment, in the case of a user of the network, the user needs to submit said provided information to server module 183 for processing, moderation, verification, validation and applying needed settings, and after successful validation and verification it is made available for other users of the network.
• server module 183 receives said information from the user and enables server administrator 3203 to review said information, and after successful reviewing, moderation, verification, validation and applying needed settings, the server administrator marks said information as verified information 3258.
• enabling server administrator 3203, or in another embodiment enabling user of network 3205, to preview said information, test applied settings, virtual objects, geo-fence boundary, schedule, and actions 3281, enabling to save as draft or edit already existing or saved information 3282, save current information 3283 or cancel or discard or remove provided information 3284.
• In an embodiment, enabling server module 183 or the server administrator to suggest or provide or display a number of points or amount of virtual money for user selection based on object type, location or place, associated type of actions, paid or sponsored or free, type of user who provided the information, schedules or duration of publication, and geo-fence boundary.
  • the server admin can apply or set one or more types of required actions to collect or get one or more virtual objects 3265 from said pre-defined geo-fence boundary 3215.
  • enabling or authorizing the server administrator 3203, or in another embodiment enabling a user of the network 3205, to add information about one or more pre-defined geo-fence boundaries 3215 of the real world and provide one or more types of settings, preferences, clues, tips, virtual objects, schedules, and required actions for users or players of the network to collect said virtual objects when the user enters into or stays for a pre-set duration in said pre-defined geo-fence boundary 3215 (see the illustrative sketch below).
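  • As an illustration only, the following is a minimal sketch, not taken from the specification, of how required actions set by an administrator could be attached to a geo-fence and checked before a virtual object is released; the action names and the function are hypothetical.

    # Illustrative only: hypothetical rule structure for required actions
    # attached to a geo-fence; the names are not taken from the specification.
    REQUIRED_ACTIONS = {"follow", "like", "view_presentation"}

    def may_collect(completed_actions, required_actions=REQUIRED_ACTIONS):
        """Return True when the user has completed every required action."""
        return required_actions.issubset(set(completed_actions))

    # Example: the user followed and liked, but has not yet viewed the presentation.
    print(may_collect({"follow", "like"}))                        # False
    print(may_collect({"follow", "like", "view_presentation"}))   # True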
  • an advertiser or sponsor can provide associated target criteria 3207, including adding, including, excluding or filtering one or more languages 3208, and a schedule of showing of the advertisement or virtual object, including start date 3285, end date 3286, and showing advertisements all the time or at a particular time or time range on a particular date or day.
  • define the target user's profile type or characteristics or modeling of target users, including any users of the network or target-criteria-specific users of the network, including one or more types of one or more profile fields such as gender, age or age range, education, qualification, home or work locations, related entities including organization or school or college or company name(s), and Boolean operators and any combination thereof 3207.
  • an advertiser or sponsor or user can allow one or more identified users, contacts, followers, past and/or present customers, prospective customers, particular defined types of users of the network 3207, or all users of the network who enter into or stay for a pre-set duration within/in/at said pre-defined one or more geo-fence boundaries of the real world, to access said virtual objects related to the pre-defined one or more geo-fence boundaries of the real world.
  • an advertiser or sponsor or user can start or pause or turn ON or OFF 3287 the availability or publication of virtual objects associated with said pre-defined one or more geo-fence boundaries of the real world for authorized users who enter into or stay for a pre-set duration within/in/at said pre-defined one or more geo-fence boundaries of the real world.
  • the server module 183 receives information related to a pre-defined geo-fence boundary, including associated virtual objects, settings, preferences, publishing or disabling or target criteria, user actions and rules.
  • the server module 183 receives said information, stores it in the server database 115, and verifies photos, videos, name, address, details, location information including place, geo-fence boundary, virtual objects, associated virtual money, text, data and metadata, applied settings, schedules, and one or more required actions.
  • the server administrator makes said information available on a map or on one or more other types of Graphical User Interfaces (GUIs) for users of the network (discussed in detail in Figures 35-36).
  • Figures 33-34 illustrate user interface(s) 290 for, in an embodiment, enabling a sponsor or advertiser or publisher user 3305 to create an account, including providing user and entity details 3301 (name, age, gender & other profile information, entity name & address, email, contact information), login information (e.g. user identity or email address, password), and billing information & payment information (if paid), or free for a general user, authorized publisher and server admin.
  • the server or system verifies the sponsor's or advertiser's or publisher's or user's account(s) or type of account and associated roles, rules, privacy, rights & privileges and policies, and activates the user's account to enable the account holder to create and manage one or more advertisement campaigns, advertisement groups, and advertisements, and to associate virtual objects, details about a pre-defined geo-fence boundary 3344 drawn or defined on the map 3345 including photos or videos, target criteria, schedules, associated user actions, geo-fence, preferences and other settings.
  • a campaign or publication comprises a set of advertisement groups (virtual objects 3366, virtual objects associated with a geo-fence boundary 3344 or drawn or defined geo-fence boundary 3344 associated virtual objects 3366, details of the geo-fence boundary, advertisements, user actions) that share a budget 3315, advertisement model type 3316, location targeting 3318, type of user profile or defined characteristics of users for targeting users of the network 3347, schedules of targeting 3310 / 3311 / 3331, language targeting 3318, and device type(s) targeting 3319 (an illustrative data-model sketch follows the next item).
  • campaign types and other settings let the advertiser control where and when their advertisements (publications of virtual objects) appear and how much they want to spend, and campaigns are often used to organize categories of products or services or brands that the advertiser offers. The advertiser is enabled to provide a campaign or publication name 3304, provide an icon or logo or image 3307, provide details 3306, search 3342 or select 3346 a location or place of business on the map or directly provide or input or select a location or place of business, draw a geo-fence boundary 3344 surrounding one or more locations or places of business 3251 or searched or selected one or more locations or places or types of locations or places, add a photo or video associated or related with the geo-fence boundary 3344 or object or product or service or brand (for example capture or record or select and add a photo or video of a food item or menu item), provide an object name, provide an object type or category, provide the address of the place of business or object or the place where the object(s) is/are physically available for purchase or view, provide Global Positioning System (GPS) coordinates, longitude and latitude describing the accurate or exact location of the place of business, provide the price of the object, provide one or more types of additional structured details by creating, selecting and adding one or more fields and providing each added field's specific details or one or more types of data or media or content, provide object details, provide, select 3367, import, search 3369, purchase 3368, design 3343, edit, update, upgrade, add 3367 and upload one or more types of one or more virtual objects or virtual elements or virtual characters 3366, and provide or select and associate a custom or user-defined number of or particular amount of or value of virtual money or virtual currency or points or numbers 3345, or use the number or amount or value of virtual money or virtual currency or points or numbers 3345 pre-set or pre-defined or pre-associated by the server for the particular category or type of real world geo-fence boundary.
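  • To make the campaign structure above concrete, here is a minimal, hypothetical data-model sketch (the class and field names are illustrative assumptions, not the specification's schema), showing a campaign that groups advertisement groups sharing a budget, schedule and targeting, each tying virtual objects and virtual money to a drawn geo-fence boundary.

    # Illustrative data-model sketch (hypothetical names, not the patent's schema).
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class VirtualObject:
        name: str
        virtual_money: int          # points / virtual currency granted on collection

    @dataclass
    class AdGroup:
        geofence: List[Tuple[float, float]]   # polygon vertices (lat, lng) drawn on the map
        virtual_objects: List[VirtualObject]
        required_actions: List[str] = field(default_factory=list)

    @dataclass
    class Campaign:
        name: str
        daily_budget: float
        start_date: str
        end_date: str
        languages: List[str]
        device_types: List[str]
        ad_groups: List[AdGroup] = field(default_factory=list)

    campaign = Campaign("Menu launch", 50.0, "2018-08-01", "2018-08-31",
                        ["en"], ["mobile"],
                        [AdGroup([(40.73, -74.00), (40.74, -74.00), (40.74, -73.99)],
                                 [VirtualObject("coupon", 100)],
                                 ["play_mini_game"])])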
  • enabling a user of the network 3305 to define or set whether the virtual object(s) and associated virtual money are displayed or shown or presented anywhere within the pre-defined geo-fence boundary 3315 of the real world, so that when a user or player enters into said defined geo-fence boundary 3315, then, without scanning any real world object within said geofence boundary, virtual objects 3366 and/or associated virtual money 3364 are displayed or presented to said user or player who entered or dwells in said pre-defined geo-fence of the real world, and said user or player is enabled to get said displayed one or more virtual objects 3366 and/or associated virtual money 3364, or to get said virtual objects 3366 and/or associated virtual money 3364 by conducting one or more pre-set activities, actions or call-to-actions 3370 or based on deciphering a pre-defined clue 3370.
  • a user of the network who enters into or stays or dwells for a particular pre-set duration in said pre-defined geo-fence 3215 of the real world and its associated one or more shops, points of interest, events, shows, and exhibitions will need to play said set or selected mini game to get or collect or capture said pre-defined geo-fence 3344 associated virtual object 3366 and/or associated virtual money 3345.
  • the geo-fence boundary 3345 surrounds the real world place of business 3251.
  • the advertiser can provide a budget for a particular duration, including a daily maximum advertisement spending budget 3315; the daily budget is the amount that the advertiser sets for each campaign to indicate how much, on average, the advertiser is willing to spend per day; the advertisement model includes pay-per-getting or selecting or capturing or winning of a virtual object by users or customers or visitors of said pre-defined geo-fence boundary 3344 related to the business place for getting associated virtual objects, wherein the virtual objects associated with the geo-fence boundary 3344 are defined and provided by the sponsor or advertiser or user 3305.
  • the advertiser or sponsor or user 3305 can search and select one or more target real world objects 3401 each associated with a particular location or place, or one or more types of target real world objects 3402 scattered at different locations, or search and select one or more movable target real world objects 3403 (e.g. an elephant at Yellowstone national park) or natural scenes each associated with a particular location, or one or more types of movable target real world objects 3404 (e.g. an animal) or natural scenes scattered at different locations, or search and select one or more geo-fence boundaries 3422 or one or more types of geo-fence boundaries 3423 for displaying virtual objects (e.g. 3366) related to the advertisement when users enter into or dwell for a pre-set duration in said pre-defined geo-fence boundary 3344.
  • the advertiser can provide associated target criteria, including adding, including, excluding or filtering one or more languages 3318; a schedule of showing of the advertisement, including start date 3310, end date 3311, and showing advertisements all the time or at a particular time or time range on a particular date or day 3331; selecting targeted device type(s) 3319 including mobile devices, personal computers, wearable devices, tablets, Android devices and/or iOS devices etc.; and defining the target user's profile type or characteristics or modeling of target users, including any users of the network or target-criteria-specific users of the network, including one or more types of one or more profile fields such as gender, age or age range, education, qualification, home or work locations, related entities including organization or school or college or company name(s), and Boolean operators and any combination thereof 3347 (see the matching sketch below).
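  • The following is a minimal illustrative sketch, under assumed criteria names, of how such target criteria could be evaluated against a user profile; the AND-combination of the present criteria is an assumption for illustration.

    # Illustrative sketch of evaluating target criteria against a user profile;
    # the criteria fields and the AND-combination are assumptions.
    def matches_target(profile, criteria):
        """Every criterion that is present must be satisfied (logical AND)."""
        if "languages" in criteria and profile.get("language") not in criteria["languages"]:
            return False
        if "genders" in criteria and profile.get("gender") not in criteria["genders"]:
            return False
        if "age_range" in criteria:
            lo, hi = criteria["age_range"]
            if not (lo <= profile.get("age", -1) <= hi):
                return False
        if "device_types" in criteria and profile.get("device") not in criteria["device_types"]:
            return False
        return True

    user = {"language": "en", "gender": "f", "age": 27, "device": "mobile"}
    print(matches_target(user, {"languages": ["en"], "age_range": (18, 35)}))  # True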
  • a user or publisher or advertiser can save a campaign 3483 to the server database 115 of the server 100 via the server module 183 and/or to the local storage medium of the user device 200, so the user can access, update, start 3485, pause 3486, stop or remove or cancel 3484, view and manage 3490 one or more created campaigns and associated information and settings, including one or more advertisement groups 3492 and advertisements 3482, and can access analytics and statistics 3493 associated with or generated by the started campaigns, advertisement groups and advertisements.
  • the user 3305 can place an order and make payment 3497 and submit said provided information to the server module 183 for processing, moderation, verification, validation and application of needed settings; after successful validation and verification, the information is made available for other users of the network.
  • the server module 183 receives said information from the user and enables the server administrator to review it; after successful review, moderation, verification, validation and application of needed settings, the server administrator marks said information as verified information.
  • the server module 183 receives said information, stores it in the server database 115, and verifies said pre-defined geo-fence boundary 3344 associated one or more photos, videos, name, object address, details, location including place, geo-fence boundary, associated virtual money, text, data and metadata, applied settings, schedules, and one or more required actions.
  • the server administrator makes said information available on a map or on one or more other types of Graphical User Interfaces (GUIs) for users of the network (discussed in detail in Figures 35-36).
  • advertiser or sponsor or user 3305 can create new 3488 or save 3494 or manage 3490 one or more advertisement campaigns and can add new advertisement group 3491 or manage existing advertisement groups 3492.
  • an advertiser or sponsor or user 3305 can create a new advertisement (publishing or displaying a virtual object to users when a user or player or customer or prospective customer visits the advertiser's place and enters into the advertiser-defined one or more geo-fence boundaries).
  • advertiser or sponsor or user 3305 can save or update 3483 or remove 3484 or manage 3482 created or drafted or published or started advertisement(s) or publication of virtual objects.
  • an advertiser or sponsor or user 3305 can start 3485 or pause 3486 already verified advertisements.
  • advertiser or sponsor or user 3305 can schedule publishing of advertisement or virtual objects 3487.
  • an advertiser or sponsor or user 3305 can view advertisement campaign, advertisement group and advertisement related statistics and analytics, including the number of users who viewed details about said pre-defined geo-fence boundary 3344, the number of users, and the number of users who got said pre-defined geo-fence boundary 3344 associated virtual objects.
  • an advertiser or sponsor or user can start or pause or turn ON or OFF 3499 the availability or publication of virtual objects associated with said pre-defined one or more geo-fence boundaries of the real world for authorized users who enter into or stay for a pre-set duration within/in/at said pre-defined one or more geo-fence boundaries of the real world.
  • Geofences are used to define virtual perimeters.
  • the system can detect whether an object crossed the geofence border (either "in" or "out"). All these events are logged, so the user can obtain geofence reports and receive alerts.
  • Route - allows creating a virtual perimeter between two (or more) points 3571 / 3573 / 3572.
  • certain locations or areas on the map 3345 are not allowed for defining a geo-fence boundary, or virtual objects are not displayed within said identified area (e.g. roads 3571 / 3573 / 3572), including roads, waters, and areas risky for the user's safety while visiting places of a geo-fence boundary and collecting or getting associated virtual objects. This is used to ensure that the user is not required to enter or dwell in a risky area such as a road route, and if the user does, an alarm can be set to inform the interested party.
  • Circle - move the circle with the mouse by pressing and dragging its center.
  • Polygon - initially has the form of a regular pentagon, which can be easily and arbitrarily changed.
  • Route - the user needs to choose start and end points, and the system automatically builds a route between them. If the user wants to add more points on the route, the user drags the route with the mouse. Next, the user determines the size of the vicinity (a route-corridor check is sketched below).
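  • As an illustration only, the following sketch (assumed parameter names; a simple equirectangular projection that is adequate for corridors of a few hundred metres) checks whether a location lies within the "vicinity" corridor of a route polyline.

    # Illustrative sketch: a "route" geofence as a corridor around a polyline.
    import math

    def _to_xy(lat, lng, ref_lat):
        r = 6_371_000.0  # mean Earth radius in metres
        return (math.radians(lng) * r * math.cos(math.radians(ref_lat)),
                math.radians(lat) * r)

    def _dist_point_segment(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def inside_route(point, route, vicinity_m):
        """True if the point lies within vicinity_m metres of any route segment."""
        ref = point[0]
        p = _to_xy(point[0], point[1], ref)
        xy = [_to_xy(lat, lng, ref) for lat, lng in route]
        return any(_dist_point_segment(p, xy[i], xy[i + 1]) <= vicinity_m
                   for i in range(len(xy) - 1))

    route = [(40.7359, -73.9911), (40.7410, -73.9897)]    # two route points
    print(inside_route((40.7380, -73.9904), route, 150))  # True: inside the corridor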
  • Geofence created by user can be edited or deleted.
  • To mark a location of interest, the user specifies its latitude and longitude.
  • To adjust the proximity for the location, the user adds a radius.
  • The latitude, longitude, and radius define a geofence, creating a circular area, or fence, around the location of interest.
  • For each geofence, the user can ask Location Services to send entrance and exit events, or the user can specify a duration to wait, or dwell, within the geofence area before triggering an event (a minimal sketch of such a circular geofence follows).
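  • The following is a minimal, hypothetical sketch (not the Location Services API itself) of a circular geofence defined by latitude, longitude and radius that reports ENTER, DWELL and EXIT events from a stream of location fixes; the class and threshold values are assumptions.

    # Illustrative sketch of a circular geofence with ENTER / DWELL / EXIT events.
    import math

    def haversine_m(lat1, lng1, lat2, lng2):
        r = 6_371_000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi, dlmb = math.radians(lat2 - lat1), math.radians(lng2 - lng1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    class CircularGeofence:
        def __init__(self, lat, lng, radius_m, dwell_s=60):
            self.lat, self.lng, self.radius_m, self.dwell_s = lat, lng, radius_m, dwell_s
            self._inside_since = None

        def update(self, lat, lng, t):
            """Feed location fixes; returns 'ENTER', 'DWELL', 'EXIT' or None."""
            inside = haversine_m(lat, lng, self.lat, self.lng) <= self.radius_m
            if inside and self._inside_since is None:
                self._inside_since = t
                return "ENTER"
            if inside and t - self._inside_since >= self.dwell_s:
                return "DWELL"
            if not inside and self._inside_since is not None:
                self._inside_since = None
                return "EXIT"
            return None

    gf = CircularGeofence(40.7359, -73.9911, 100, dwell_s=120)
    print(gf.update(40.7360, -73.9910, 0))     # ENTER
    print(gf.update(40.7361, -73.9909, 150))   # DWELL (stayed 150 s >= 120 s)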
  • the system automatically identifies risky or hazardous roads and physical structures or infrastructure and prevents the display of virtual objects on, surrounding, near or at said types of places or areas or locations.
  • the system can run Beacons & Geofencing campaigns, using geofences and beacons either separately or in conjunction with one another.
  • Figure 35 illustrates a user interface 290 wherein the user can view maps of the real or physical world and can view associated pre-defined objects related to particular places or locations (discussed in detail in Figures 3-6, e.g. a public place object or a particular product related to a particular business place), and in the event of a click on the photo or video or icon or name or place name of a pre-defined object, associated information is displayed (discussed in detail in Figures 8-9). The map also shows pre-defined geo-fence boundaries (discussed in detail in Figures 32-34), and in the event of a click on a particular pre-defined geo-fence boundary, associated information is displayed.
  • the user 3501 can view pre-defined information about said pre-defined geofence boundary 3535 and can tap on the direction icon 3550 or the step by step guided direction icon 3548 to view the route 3533 and directions or step by step guided directions to reach the pre-defined geofence boundary 3535.
  • the user is notified about entering into said pre-defined geofence boundary 3535 by the server module 184, and in the event of clicking or tapping on the notification or opening the application, in an embodiment a 3D virtual world map interface 3610 (Figure 36) / 291 is displayed to said user 3501, which saves the user device's battery power consumption.
  • the game has a virtual world geography that corresponds to the real world geography; as a result, as the player or the virtual avatar of the player 3601 continuously moves about or navigates in a range of coordinates in the real world, the player also continuously moves about in a range of coordinates in the real world digital map or virtual world (a coordinate-mapping sketch follows below).
  • the server module 184 accesses the pre-defined geo-fence 3535 in the real world and associated virtual objects 3621 / 3622 / 3623 / 3624, and responsive to the client device 200 of the user 3601 being within the pre-defined boundary of the geofence 3535 in the real world, or within a predefined radius surrounding the particular place 3535 in the real world, or within a set distance of the location of the particular place in the real world, displays or provides, by the server module 184, one or more virtual objects 3621 / 3622 / 3623 / 3624 and associated data including virtual money to the client device 200.
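  • To illustrate the real-world-to-virtual-world correspondence described above, here is a minimal sketch, under assumed parameters, that maps a GPS fix to virtual-world/map coordinates with a Web Mercator projection, so the avatar's position tracks the player's real-world position.

    # Illustrative sketch: project WGS84 lat/lng into a square virtual-world space.
    import math

    def latlng_to_world(lat, lng, world_size=4096.0):
        x = (lng + 180.0) / 360.0 * world_size
        siny = math.sin(math.radians(lat))
        siny = min(max(siny, -0.9999), 0.9999)   # clamp near the poles
        y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * world_size
        return x, y

    # As the tracked device moves, the avatar is re-positioned in the virtual world.
    print(latlng_to_world(40.7359, -73.9911))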
  • the server module 184 displays one or more types of virtual objects at a particular pre-defined geo-fence boundary based on the type of location or place surrounding the user (e.g. garden or zoo) and associated keywords (e.g. if garden then flower, tree, bird; if zoo then animal) or the type of real world objects, and based on geofence or place associated information including weather, current date & time, environment (rain, fog, snow, cold, heat, sunrise, sunset) or current information about the place provided by users of the network, user preferences, type of subscriptions (paid, free, sponsored), current level, accompanying contacts or users of the network, number of users of the network, advertised virtual objects, number of virtual objects allowed to display for all or a particular user, number of virtual objects currently remaining for all or a particular user within the user's current location associated geo-fence, area of the geo-fence, current total number of users who entered and stay within said geo-fence boundary, user profile data including age, gender, interest, income range, home and work location (local or tourist or foreigner), education, skills, position, virtual objects associated rules and required actions, whether the user is authorized to access or display or view virtual objects or not, schedules of availability of virtual objects, number of steps walked within said geofence boundary, duration of stay within said geofence boundary, and the number of virtual objects and amount of virtual money currently collected or won or got or acquired within said geofence boundary (a filtering sketch follows below).
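  • The sketch below illustrates, with assumed signal and field names, how such contextual signals could filter which virtual objects are shown inside a geofence; it is an example, not an exhaustive implementation of the factors listed above.

    # Illustrative sketch: filter virtual objects by contextual signals.
    def select_virtual_objects(catalog, context):
        selected = []
        for obj in catalog:
            if obj.get("place_type") and obj["place_type"] != context["place_type"]:
                continue                      # e.g. "zoo" objects only shown in a zoo
            if obj.get("weather") and obj["weather"] != context["weather"]:
                continue
            if obj.get("remaining", 1) <= 0:  # limited number left in this geofence
                continue
            if not (obj.get("from_hour", 0) <= context["hour"] < obj.get("to_hour", 24)):
                continue
            selected.append(obj)
        return selected

    catalog = [{"name": "virtual peacock", "place_type": "zoo", "remaining": 3},
               {"name": "virtual umbrella", "weather": "rain"},
               {"name": "virtual flower", "place_type": "garden"}]
    ctx = {"place_type": "zoo", "weather": "clear", "hour": 14}
    print([o["name"] for o in select_virtual_objects(catalog, ctx)])  # ['virtual peacock']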
  • the game has a virtual world geography that corresponds to the real world geography; as a result, as the player continuously moves about or navigates in a range of coordinates in the real world, the player or the avatar of the player 3601 also continuously moves about in a range of coordinates in the real world map or virtual world.
  • a particular type of virtual animal may be displayed running within the user's virtual world, which corresponds to the real world geography, so the user or player also needs to run fast in the real world to capture said running animal in the virtual world.
  • the user or player may need to conduct one or more types of activities, actions, reactions, participations, transactions, follow rules, and play mini games to capture said movable virtual objects. For example, the user provides a virtual feed item to a virtual bird or animal so that the speed of the virtual bird or animal decreases, and the user may then catch the virtual bird by throwing a virtual net or catch the animal by employing a cage (a capture-mechanic sketch follows below).
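  • As an illustration only, here is a minimal sketch of such a chase mechanic under assumed numbers: the virtual animal keeps moving away, feeding halves its speed, and a capture attempt succeeds once the player has closed to within a catch range.

    # Illustrative sketch of a movable virtual object (values are arbitrary).
    import math

    class VirtualAnimal:
        def __init__(self, x, y, speed=2.5):
            self.x, self.y, self.speed = x, y, speed

        def feed(self):
            self.speed *= 0.5          # feeding slows the animal down

        def step_away_from(self, px, py, dt):
            dx, dy = self.x - px, self.y - py
            d = math.hypot(dx, dy) or 1.0
            self.x += dx / d * self.speed * dt
            self.y += dy / d * self.speed * dt

        def try_capture(self, px, py, catch_range=5.0):
            return math.hypot(self.x - px, self.y - py) <= catch_range

    animal = VirtualAnimal(10.0, 0.0)
    animal.feed()                          # player throws a virtual feed item
    animal.step_away_from(0.0, 0.0, dt=1)  # animal keeps moving, but more slowly
    print(animal.try_capture(8.0, 0.0))    # True once the player has run close enough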
  • in the event of a click or tap on the pre-defined geofence boundary 3545 by the user 3501, the user can view pre-defined information about said pre-defined geofence boundary 3545 and can tap on the direction icon 3550 or the step by step guided direction icon 3548 to view the route 3531 and directions or step by step guided directions to reach the pre-defined geofence boundary 3545.
  • the user 3501 is notified about entering into said pre-defined geofence boundary 3545 by the server module 184, and in the event of clicking or tapping on the notification or opening the application, in an embodiment a 2D virtual world map interface 3590 (291) is displayed to said user 3501, which saves the user device's battery power consumption.
  • the game has a virtual world geography 3590 that corresponds to the real world geography; as a result, as the player or the virtual avatar of the player 3501 continuously moves about or navigates in a range of coordinates in the real world, the player also continuously moves about in a range of coordinates in the real world digital map or virtual world.
  • in the event of entering into said pre-defined geofence boundary 3545, or after staying or dwelling in said pre-defined geofence boundary 3545 for a pre-set duration, one or more types of virtual objects 3560 are displayed by the server module 184.
  • the game has a virtual world geography 3590 that corresponds to the real world geography; as a result, as the player or user 3501 continuously moves about or navigates in a range of coordinates in the real world, the player 3501 also continuously moves about in a range of coordinates in the real world map or virtual world 3590.
  • the server module 184 accesses the pre-defined geo-fence 3545 in the real world and associated virtual objects 3560 / 3562, and responsive to the client device 200 of the user 3501 being within the pre-defined boundary of the geofence 3545 in the real world, or within a predefined radius surrounding the particular place 3545 in the real world, or within a set distance of the location of the particular place in the real world, displays or provides, by the server module 184, one or more virtual objects 3560 / 3562 and associated data including virtual money to the client device 200, and automatically stores, by the server module 184, the one or more virtual objects surrounding the user that were provided to the client device 200.
  • the game data stored in the game database 115 may comprise data associated with the virtual world in the location-based game, including imagery data used to render the virtual world on a display device, geographic coordinates of locations in the virtual world, etc. (the system may, for example, employ the Google Maps (TM) API for games), data associated with players of the location-based game including player profile, level, virtual objects, virtual money, current player positions in the virtual world/real world, player power, settings, privacy settings, preferences, team information, data associated with the game, and data associated with virtual objects in the virtual world.
  • the game data stored in the game database 115 can be populated.
  • providing a game comprising: hosting, at a game server 110, a game, the game having a virtual world geography 3590 / 3610 / 3701 / 3730 that corresponds to the real world geography, so that, as the player 3501 / 3601 / 3705 / 3725 continuously moves about or navigates in a range of coordinates in the real world, the player also continuously moves about in a range of coordinates in the real world map (in combination with or having virtual objects) or virtual world 3590 / 3610 / 3701 / 3730; accessing a pre-defined geo-fence 3545 in the real world and associated virtual objects (e.g. 3560 / 3562 / 3710 / 3715 / 3720 / 3725); responsive to the client device 200 being within the pre-defined boundary of the geofence 3545 in the real world, displaying or providing, by the game server 110, one or more types of one or more virtual objects and associated data including virtual money (e.g. 3560 / 3562 / 3710 / 3715 / 3720 / 3725) to the client device 200; and storing 115, by the game server 110, the one or more virtual objects and associated data including virtual money (e.g. 3560 / 3562 / 3710 / 3715 / 3720 / 3725) provided to the client device 200 in a virtual object and virtual money collection (discussed in detail in Figures 22-26) associated with the client device 200.
  • providing a game comprising: hosting, at a game server 110, a game, the game having a virtual world geography 3590 / 3610 / 3701 / 3730 that corresponds to the real world geography, so that, as the player 3501 / 3601 / 3705 / 3725 continuously moves about or navigates in a range of coordinates in the real world, the player also continuously moves about in a range of coordinates in the real world map (in combination with or having virtual objects) or virtual world 3590 / 3610 / 3701 / 3730; receiving, by the game server 110, a plurality of requests from a plurality of sponsors, each of the plurality of requests requesting that a virtual object or virtual element 1208 associated with the request be included at a location 3522 / 3564 in the virtual world 3590 / 3610 corresponding to a location 3522 / 3564 of a business (e.g. "Blue Note" 3564) in the real world, the business associated with the sponsor that provided the request to the game server 110 from the plurality of sponsors; selecting, by the game server, at least one request from the plurality of requests; responsive to selecting the at least one request, modifying, by the game server, game data to include the virtual element associated with the at least one request in the game at the location 3522 / 3564 in the virtual world 3590 / 3610 requested by the at least one request; providing, by the game server 110, the modified game data to a client device 200 of a player 3501 / 3601 / 3705 / 3725; and responsive to the client device 200 being within a predefined geofence boundary 3545 or a set distance of the location of the business 3522 / 3564 in the real world of the at least one request, in an embodiment enabling the user to view information associated with the pre-defined real world object 3551 / 3553 / 3611 / 3751 / 3753 about said business location 3522 / 3564 or geofence boundary 3545.
  • the virtual object is usable in the virtual world.
  • the virtual object comprises one or more types of power in the game, a virtual item, virtual element, virtual reward, virtual money, virtual currency or other suitable virtual goods including a geo-filter.
  • the virtual object is provided to the user or the player in response to the client device being within a pre-defined geofence boundary, or within a pre-defined geofence boundary surrounding the location of the business of the sponsor.
  • the virtual object is provided to the user or the player in response to the client device being within a pre-defined geofence boundary, or within a pre-defined geofence boundary surrounding the location of the business of the sponsor, and the player or the user making a purchase of an object or product or service at the business of the sponsor.
  • the virtual object is redeemable in the real world.
  • the virtual object is a coupon, a redeemable point, a gift, a sample, an offer, cash back, discount, or voucher redeemable in the real world.
  • Figure 37 illustrates displaying 3D maps of a real world outdoor 3701 and indoor 3730 view for enabling the real world player's virtual avatar 3705 / 3725 to visit or enter a pre-defined geofence boundary of a particular place (e.g. a shop or restaurant or place of business) 3702, wherein the 3D real world map 3701 / 3730 also contains and displays virtual objects 3710 / 3715 / 3720 (provided or defined by the sponsor or users of the network or server administrator) based on one or more types of user associated and user related data and matching one or more types of user data, including current or nearby location and user profile, with the sponsor's criteria and rules, including that the user or player 3705 / 3725 is required to play one or more types of mini games or purchase products.
  • Figure 37 illustrates a virtual world geography that corresponds to the real world geography and which displays virtual objects 3710 / 3715 / 3720 which may be used in the virtual world and/or may be redeemable in the real world.
  • the real world player can select, get, win, capture, acquire, claim, add to the user's collection of virtual objects, and play a mini game to capture said displayed one or more types of one or more virtual objects, virtual money and virtual rewards including vouchers, redeemable points, coupons, offers, gifts, samples, cash backs, and discounts 3710 / 3715 / 3720, which may be redeemable in the real world.
  • Figure 38 illustrates examples of, in an embodiment, identifying the particular accurate point of location of a real world product (e.g. 3881 / 3882), item, object, art, food item, or painting 3811 based on a beacon or iBeacon 3810 (e.g. 3808 / 3809) and precise and fast location detection technology such as UWB RTLS (Ultra-wideband) 3810.
  • iBeacon technology is widely used with smartphone apps that deliver location specific content automatically to the user indoors or outdoors. Beacons are practical because most devices have a Bluetooth receiver and can communicate with beacons. Beacons facilitate only one-way communication; that is, beacons can only transmit signals to the user's or player's 3812 smartphone 200 and cannot receive signals from it. Beacon 3810 (e.g. 3808 / 3809) signals can only trigger an action if a mobile app 3801 reacts to the signal. For example, if 3 beacons (e.g. 3810) are deployed at 3 different sections, the beacon 3810 (e.g. 3808) that is closest to the player 3820 (the distance is approximated based on the signal strength of the beacon) will push a notification to the user's 3812 smartphone 200, and based on that the server module 184 displays said identified beacon 3810 (or e.g. 3808 / 3809) associated identified real world object (a proximity-estimation sketch follows below).
  • smartphone apps 3830 / 3801 can detect the beacon 3810 (e.g. 3808 / 3809) signal, estimate the proximity of the beacon 3810 (e.g. 3808 / 3809), and deliver location specific content (e.g. virtual object 3822 (e.g. 3802 / 3807)).
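  • As an illustration only, the following sketch estimates distance to each beacon from its received signal strength with the common log-distance path-loss model and picks the closest one, so its associated virtual object can be shown; the calibration values and beacon identifiers are assumptions.

    # Illustrative sketch: approximate beacon distance from RSSI, pick the closest.
    def rssi_to_metres(rssi, tx_power=-59, n=2.0):
        """tx_power is the expected RSSI at 1 m; n is the path-loss exponent."""
        return 10 ** ((tx_power - rssi) / (10 * n))

    def closest_beacon(readings):
        """readings: {beacon_id: rssi}; returns (beacon_id, estimated_metres)."""
        bid = min(readings, key=lambda b: rssi_to_metres(readings[b]))
        return bid, rssi_to_metres(readings[bid])

    readings = {"painting_3808": -62, "painting_3809": -75, "painting_3810": -80}
    print(closest_beacon(readings))  # ('painting_3808', ~1.4 m) -> show its virtual object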
  • an art museum may associate a relevant virtual object with each painting, using beacons to display the virtual object when the user or player stands right in front of the painting (e.g. within 1 meter or more, depending on the accuracy of the beacon).
  • the game has a virtual world geography 3830 that corresponds to the real world geography 3825; as a result, as the player or virtual avatar 3820 of the real world player 3812 continuously moves about or navigates in a range of coordinates in the real world (e.g. in the museum), the real world player 3812 also continuously moves about in a range of coordinates in the real world digital map or virtual world 3830.
  • Figure 39 illustrates a user interface for selecting, updating, and applying one or more types of settings, preferences, and privacy settings, including the application's home related settings 3905, such as setting the home screen of the application to the augmented reality scanning application or feature 3901, to the map interface (displaying real world place or location specific real world objects and associated details), to the feed or timeline 3904 (viewing shared or received one or more types of contents including photos, videos, scores, status updates, posts, messages and the like), or to notifications 3903 (viewing various types of notifications containing suggested or nearby places or real world objects for notifying or alerting the user to scan said one or more real world objects or scenes and get associated or displayed one or more types of virtual objects, as well as instructions from connected users, server administrators or automatically from the server, sponsors and users of the network).
  • the user may select, apply, set and update notification 3920 related settings, including receiving notifications from all or selected friends or contacts 3910 and all or selected teams 3911.
  • notification types may comprise receiving a notification when the user is near a particular location or place or real world object or particular pre-defined geofence boundary 3921, suggested real world objects or particular pre-defined geofence boundaries or locations or places of real world objects 3922, and receiving a notification when the user receives an instruction from contacts or team members or server administrators.
  • the user may set, update and apply "Do Not Disturb" policies wherein the user can turn ON or OFF the receiving of notifications, receive only selected type specific notifications or notifications only from selected contacts, apply a vibration or ring tone type, and schedule the receiving of notifications 2938.
  • the user may select, set, update and apply map settings 3950, including automatically adding and showing interacted entities on the map 3951, showing real world objects and geofence boundaries 3952, and showing the map of objects in 3D or 2D mode 3953.
  • Digital Spectacle settings 3960 include notify about nearby real world objects or scenes or real-time updated scenes 3961, auto scan when the user views real world objects 3962, notify when entering into any pre-defined geofence 3963, and auto scan when the user views virtual objects within a geofence 3964.
  • Augmented Reality Recognition settings 3980 include needing to tap on the Augmented Reality Button 3981, and view in camera to auto start augmented reality scanning 3982.
  • the user may select, set, update and apply other settings, including auto capture photo or record video while scanning real world objects and display to the user for review and sharing 3983, and auto log scanning of real world objects and associated required actions, transactions, and reactions related information and display to the user for review and sharing 3984.
  • Figure 40 illustrates an example Graphical User Interface (GUI) for the home 4005 of the client application installed in the user device 200, which is connected with the server 110 and server database 115 and enables the user to access various types of features, including: accessing various types of notifications (discussed in detail in Figures 3-7, 15 and 3920) 4031 and currently received notifications 4006 from a particular source 4007; defining and submitting for verification real world objects (discussed in detail in Figures 3-7) 4053; searching and viewing details of real world objects or places of real world objects and details of associated real world objects on the map 4036 (which by default shows the current location's surrounding real world objects and places of real world objects, discussed in detail in Figures 8(B) and 9(B)); viewing nearby places related real world objects 4049; conducting augmented reality scanning by viewing 4025 in the camera 4060 a real world object 4025 or clicking or tapping on the camera application's associated "Augmented Reality Scanning" button or icon 4038 to get, win, collect and capture displayed or associated virtual objects 4022 (discussed in detail in Figures 8(A), 9(A) and 10-14); viewing logs of real world as well as digital activities 4052 (discussed in detail in Figures 27-31); and taking a photo 4041, recording a video 4042 or live streaming a video 4043 for submitting activity-type-specific one or more photos or videos and, in the event of successful verification by the server, receiving associated virtual objects, virtual money and virtual rewards (discussed in detail in Figures 19-21).
  • the user can also view the player or user name or nick name 4001, the pre-defined custom avatar of the user or player 4002, the user's current level 4003, and the number of points or amount or value of virtual objects and virtual money 4004 received by the user or player 4001. The user can view the detailed profile of the user (discussed in detail in Figure 41).
  • the user's device will vibrate to alert the user when one or more objects in the real world and associated one or more virtual objects are nearby. If the user doesn't see any objects and associated virtual objects nearby, the user can take a walk to find or identify objects in a pre-defined geofence boundary in the real world and the associated virtual objects, or find, identify and scan objects and try to get the associated virtual objects. The user can guess places and objects and try to scan the guessed objects in the real world to capture or get the associated virtual objects by employing augmented reality scanning. If the user finds uncommon animals (e.g. an elephant) or birds (e.g. a peacock), then the user can use augmented reality scanning to capture or get the associated virtual objects. In the case of locally common animals and birds, users will not be able to get virtual objects. Based on the user's home address, in a foreign country common animals and birds are treated as special for the user for a pre-set number of days, so the user will be able to scan and get a preset number of virtual objects (per type).
  • identified and pre-defined objects in the real world and associated virtual objects located in the user's area are displayed in the "Nearby objects in real world and associated virtual objects" section in the bottom right of the screen in Map View.
  • virtual objects already in the user's collections are shown in a different color.
  • when the user encounters virtual objects, the user may have a pre-set duration to scan and capture certain types of virtual objects, and/or, due to the limited availability of certain types of virtual objects within a particular duration at a particular place, the user needs to scan and capture virtual objects before they become unavailable, or needs to reach them before another user reaches, scans and captures them.
  • per type of animal, bird, etc., one scan per day is allowed to get them. If scanning is done by group members or in a group, then more points or virtual money value may be provided to each group user.
  • Figure 41 illustrates an example Graphical User Interface (GUI) for enabling the user to view the user's updated details, including the current level number 4112, associated number of points 4111, and indicator 4110.
  • the user interface may also show the user's virtual customized avatar 4105, which is displayed in the virtual world or in the virtual world having corresponding real world geography.
  • the user interface may also show the user's name, nick name 4101, and a detail profile link 4101 to view various types of user related data.
  • the server module 188 calculates scores based on health related activity type specific points or virtual money or virtual objects associated value and displays the scores to the user on indicator 4121.
  • the server module 188 calculates scores based on monitored and tracked social types of activities and displays the scores to the user on indicator 4122.
  • the server module 188 calculates scores based on monitored and tracked entertainment types of activities and displays the scores to the user on indicator 4123.
  • based on the number of connections in the social network, visiting places or geofence boundaries with one or more contacts, and participating in events with contacts, the server module 188 calculates scores and displays the scores to the user on indicator 4124.
  • the server module 188 calculates scores and displays the scores to the user on indicator 4125.
  • the server module 188 calculates scores and displays the scores to the user on indicator 4126, wherein travel related activities include visiting particular types and numbers of places and points of interest within a particular duration and/or associated money spending, number of miles walked during travel, and conducted types of activities including elephant or camel or horse riding, ballooning, water sports, and visiting museums, gardens, zoos, beaches, rivers, sanctuaries, forts, forests, mountains, trekking, art galleries, amusement parks, and water parks.
  • the server module 188 calculates scores and displays the scores to the user on indicator 4127.
  • the server module 188 calculates scores and displays the scores to the user on indicator 4131.
  • the server module 188 calculates scores and displays the scores to the user on indicator.
  • the server module 188 calculates scores and displays the scores to the user on indicator 4136.
  • the server module 188 updates various types or categories of activity levels and displays them to the user on the associated indicator (a score-aggregation sketch follows below).
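  • The following minimal sketch, with assumed activity names and point values, illustrates how per-category activity scores (health, social, entertainment, travel, and so on) could be aggregated from a log of tracked activities before being shown on the indicators above.

    # Illustrative sketch: aggregate per-category activity scores (example values).
    POINTS = {"walk_km": 10, "museum_visit": 50, "movie": 20, "friend_meetup": 30}
    CATEGORY = {"walk_km": "health", "museum_visit": "travel",
                "movie": "entertainment", "friend_meetup": "social"}

    def category_scores(activity_log):
        scores = {}
        for activity, count in activity_log.items():
            cat = CATEGORY.get(activity)
            if cat:
                scores[cat] = scores.get(cat, 0) + POINTS[activity] * count
        return scores

    log = {"walk_km": 5, "museum_visit": 1, "movie": 2, "friend_meetup": 1}
    print(category_scores(log))
    # {'health': 50, 'travel': 50, 'entertainment': 40, 'social': 30}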
  • the user 4101 can view and manage the user's one or more types of profiles 4171, including general, social, professional, business, health and one or more types of activities profiles.
  • the user 4101 can view the user's collection of virtual objects, virtual money and virtual rewards 4172.
  • the user 4101 can view and manage the user's logged real world and digital activities, actions, participations, and transactions 4173.
  • the user 4101 can view and manage user related statistics and analytics, including the number of places or geofence boundaries visited, number of miles walked, number of and types of and amount of virtual objects and virtual money received by the user, types of and number of activities done by the user and transactions conducted by the user, scores of played games, number of instructions received, number of instructions sent, number of notifications received from the server, sponsors, contacts of the user and users of the network, number of photos and/or videos submitted for verifying various types of activities related to the user and receiving virtual objects, virtual money and virtual rewards, number of, names of and types of entities interacted with, transacted with, connected to, followed and related to the user, number of and types of digital activities, actions, and transactions, status and level of the user and associated points or amount of virtual money acquired, collected, won, purchased, captured and received by the user, and number of photos, videos and posts shared by the user.
  • FIG. 42 illustrates an example Feed Graphical User Interface (GUI) 281 wherein the server module 161 generates a feed item or news feed or feed post based on a shared or posted or published or sent photo or video of real world objects, or a photo or video of the user or one or more contacts of the user with real world objects, task or instruction messages and associated received one or more types of virtual objects, virtual money, virtual power, virtual goods and associated contents or media including photos or videos, associated metadata, reactions, and scores, and serves it to authorized or related or requesting viewing users or their feed or timeline, wherein the feed items or feed posts comprise the user's own 4205 and other users' (including connected users, current location or checked-in place specific, and logged-preferences-specific users' 4225) shared, published and posted one or more types of contents or media, task or instruction messages and associated received one or more types of virtual objects, virtual money, virtual power, virtual goods and associated contents or media including photos or videos, associated metadata, reactions, scores, analytics and statistics.
  • the user can provide one or more types of reactions, including liking and commenting on, and take one or more types of actions, including sharing, the displayed posts or one or more types of contents or media, including task or instruction messages and associated received one or more types of virtual objects, virtual money, virtual power, virtual goods and associated contents or media including photos or videos.
  • the user can view scores and statistics, including the total number of tasks accepted and the total number of accepted tasks fulfilled, the total number of likes, comments, shares, views, points and scores 4251, the total number of tasks assigned, and the total number of assigned-task-specific received media and associated likes, comments, shares, views, points and scores 4252.
  • the user can view detailed scores from the post 4255.
  • the posting user can mention accessible user names and associated links of users who helped the user in conducting said assigned tasks.
  • enabling the user to search and match 4202 feed items or feed posts or one or more types of contents published or shared or sent by the user's contacts, participated or administrated team members, groups, networks, followed sources and users of the network who shared or published posts or one or more types of contents or media, including photos and videos with associated task message, metadata, scores, reactions, and statistics.
  • sort 4261 and filter 4260 displayed feed items or feed posts or one or more types of contents published or shared or sent to the user by the user's contacts, groups, participated or administrated team members, networks, followed sources and users of the network who shared or published posts or one or more types of contents or media including photos and videos with associated task message, metadata, scores, reactions, and statistics, wherein sort or filter types comprise sorting or filtering as per received date & time or ranges of received date & time, one or more sources or contact names or team names or group names or followed sources, one or more locations or places or a defined radius surrounding the current or a particular location, highest to lowest views, scores and types of reactions, highest posts from sources, keywords found in instruction or task messages, keywords matched with keywords related to recognized objects in the photo or video (series of images), keywords matched with metadata associated with posts or displayed one or more types of contents or media, media integrated with a task message or part of a message or image, ephemeral content only, type of media or content including photo or video, the user's own published or posted or sent contents or media or posts, and any combination thereof.
  • user can view and manage user’s contacts and connections and search, select and invite users of network for connecting with them or accept received invitation of other users of network for connecting with them.
  • Figure 43 illustrates example Graphical User interface (GUI) showing story on map interface 282.
  • monitoring, tracking, and recording of story instructions from the user 4307, wherein information about each visited route or place comprises monitoring, tracking, recording, receiving, processing, logging and storing each visited route and place or location or location co-ordinates with associated Global Positioning System (GPS) coordinates, longitude, latitude, altitude, date and time, information related to or associated with the visited location or place, and user provided contents and data including one or more photos, videos, live video, structured contents (structured contents may be provided via fields and associated one or more types of values, data and contents or forms or dynamically presented forms), voice, images, links or web addresses, text, animations, 3D contents, multimedia, emoticons, stickers, emoji, and place information provided by the user and/or suggested by the server and added or edited by the user.
  • the server module 190 marks and shows 4389 the starting point of the location or Global Positioning System (GPS) coordinates.
  • the server module 190 notifies the user 4307 with information about the current place 4313 (e.g. "Take a photo or video or selfie outside of the place").
  • the user can view said received notification, tap on the notification, capture one or more photos or record videos, and add 4391 or associate said captured photo or recorded video 4302 and one or more types of content, including text or comments, to/with said place 4313.
  • the server can suggest one or more types of information to the user based on the user profile, one or more types of user related data, and place information, including place name, place details, place related information shared by other users of the network, and data or information including menu items, products, services, and movie names and details.
  • the server module 190 monitors, tracks, and logs the routes 4332 from the starting place to each subsequently visited place.
  • a movable icon or image or sticker or emoji or emoticon with the user's avatar or virtual character or realistic character (2D or 3D) is shown on a map that corresponds to the real world geography; as a result, as the user 4307 continuously moves about or navigates in a range of coordinates in the real world, based on monitoring and tracking the current location of the client device 200, the user 4307 also continuously moves about in a range of coordinates in the real world map 4340 or virtual world 4340.
  • the user can capture one or more photos at particular Global Positioning System (GPS) co-ordinates 4308 or at a visited place 4313 or the current place 4301.
  • the user can provide a title to the story on/with/overlaid on the map, or provide one or more types of details 4362 and structured data by using forms or providing field specific values or data.
  • the user can provide text 4363 or voice 4377 or video 4384 commentary, with or without music, at any time with any location point 4313 or route 4332 or Global Positioning System (GPS) co-ordinates.
  • the server module 190 dynamically determines, generates and presents to the user a form or structured form for enabling the user to provide structured details or data or one or more types of contents 4364, where it dynamically determines the form based on the type of currently visited place, the user profile or one or more types of user related or associated data, the speed of the user's movement based on speed changes or updates of the monitored or tracked user device's current location, provided structured details, and provided or associated one or more types of contents with the location point 4313 or route 4332 or Global Positioning System (GPS) co-ordinates.
  • the server module 190 determines the product or service based on a submitted purchase receipt or by linking with the vendor or seller's system or database, accompanying users or persons or contacts and the associated type of relationships, the current date and time and associated event information, place associated details and one or more types of data or contents and reactions, the type and details of the user's activity, participation in a particular event, conducting of a particular transaction, and status (provided by the user, e.g. eating a particular food item, viewing a particular movie or show or television program, reading a particular book, listening to a particular song).
  • the user can use map directions 4350 and the step by step direction guide 4348 to travel from the current place to the next place or destination.
  • the server module 190 sends various types of notifications to the user, including notifying the user to view information or suggested contents related to or about the current place or route or nearby or surrounding places or entered geofence associated places, persons, products, brands, and services, and notifying the user about points of interest to suggest the user take photos or record videos or provide one or more types of contents or structured data or details.
  • the server module 190 identifies the identity and associated details of one or more users or contacts accompanying the user based on matching the monitored or tracked current location of the user device with the current location of the user devices of other users of the network or contacts.
  • in an embodiment, the server module 190 enables the user to conduct augmented reality scanning or to take one or more photos or videos, and in the event of receiving said photo or video or detecting that augmented reality scanning has been conducted, the server module 190 recognizes objects in the received photo or video or raw photo or scanned data and identifies and stores associated keywords with the location or place or location co-ordinates where the user conducted and submitted the augmented reality scanning or taking of a photo or video. In an embodiment, the user is enabled to record voice and edit it with a particular location point 4313 or route 4332 or Global Positioning System (GPS) co-ordinates.
  • after receiving the above discussed plurality of types of data, the server module 190 generates a visual story or story on/in/with/overlaid on the map, wherein the generated map comprises displaying the user's avatar or realistic character or virtual character at the first place 4313 and displaying or automatically playing or showing the associated one or more types of contents including photos and videos in sequence or story format, then displaying the user's avatar or realistic character or virtual character moving from the first place 4313 to the next or second place 4323 along route 4332 based on pre-defined settings including the display movement speed, and then displaying or automatically playing or showing or presenting the second place 4323 associated one or more types of contents including photos and videos 4305 in sequence or story format (a story-assembly sketch follows below).
  • the server module displays said generated story with the map to the user and enables the user to edit the story, including adding, editing, or removing one or more types of contents including photos, videos, text, web addresses or links, music, voice, images, emoji, comments, and reactions, providing one or more types of settings including the speed 4303 of movement of the user's avatar or realistic or virtual character, providing additional structured details, and selecting all 4553 or one or more contacts 4554 or followers.
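  • To make the story generation above concrete, here is a minimal, hypothetical sketch (field names are illustrative, not the specification's schema) that assembles an ordered list of "place" segments, each with its media, separated by "route" segments the avatar travels between them.

    # Illustrative sketch: assemble a "story on map" from visits and routes.
    def build_story(visits, routes, playback_speed=1.0):
        """visits: [{'place', 'arrived', 'media': [...]}]; routes: [[(lat, lng), ...]]"""
        segments = []
        for i, visit in enumerate(visits):
            segments.append({"type": "place", "place": visit["place"],
                             "media": visit["media"]})
            if i < len(routes):                      # avatar travels to the next place
                segments.append({"type": "route", "path": routes[i],
                                 "speed": playback_speed})
        return segments

    visits = [{"place": "cafe", "arrived": "10:05", "media": ["photo1.jpg"]},
              {"place": "gallery", "arrived": "11:20", "media": ["clip1.mp4"]}]
    routes = [[(40.7359, -73.9911), (40.7410, -73.9897)]]
    for seg in build_story(visits, routes):
        print(seg["type"], seg.get("place", ""))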
  • the user is enabled to view 4340, play, pause, stop, go to start, go to end, rewind, forward, and fast forward 4374, and can directly click on a particular place 4313 or content item 4302 (e.g. a photo or video or text) and view or play from, or jump to, the start or a particular point or duration or location or place 4374 to preview and update one or more types of contents, including routes and directions 4332, one or more photos, videos, live video, structured contents (structured contents may be provided via fields and associated one or more types of values, data and contents or forms or dynamically presented forms), voice, images, links or web addresses, text, animations, 3D contents, multimedia, emoticons, stickers, emoji, and place.
  • the user or admin user can view, access, edit and update combined stories of one or more accompanying users.
  • user can provide pause duration at appropriate places in said story on map or story with map.
  • the user can allow the receiving or viewing user(s) 4352 of said story on map or story with map 4340 to view said story on map or story with map in 2D format 4340 or in 3D format; in the event of clicking or tapping on this option, displaying to the user the 3D virtual world 4505 (outside of place view), or 4506 (inside place view when the user enters), or 4530 (inside place activity (e.g. eating of [Pizza]) view in the form of captured or recorded or associated or added or shared photos or videos), or 4601 (inside place activity view).
  • 4343, location co-ordinates 4308, or a selected location point 4308 on map 4340 enable the user to turn on 4376 or share a live story on map or story with map with all 4353 or one or more selected contacts 4354, followers 4356, or types of users of the network, including users matching one or more selected types or presences or one or more criteria combined with boolean operators (for example, gender AND age range AND particular named school).
  • The user can select 4309 / 4314 / 4315 / 4316 / 4367 and include or exclude one or more visited places 4313 or sub-places 4366, pre-defined geofence boundaries 4323 /
  • The user can use a realistic avatar or a virtual avatar or, as per privacy settings, use both 4370 while recording, so the user can share the story with the user's realistic avatar with one or more selected contacts, or share the story with the virtual avatar with followers or criteria-specific types of users of the network.
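The per-audience avatar rule described above can be captured with a small lookup, sketched below; the setting keys and avatar labels are illustrative assumptions, not the disclosure's schema.

```python
def avatar_for_viewer(owner_settings: dict, viewer_id: str) -> str:
    """Pick which avatar to render for a given viewer according to the owner's
    privacy settings: contacts see the realistic avatar, everyone else the virtual one."""
    if viewer_id in owner_settings.get("contact_ids", set()):
        return owner_settings.get("contact_avatar", "realistic")
    return owner_settings.get("public_avatar", "virtual")


# Example: contacts get the realistic avatar, followers get the virtual one.
settings = {"contact_ids": {"u42"}, "contact_avatar": "realistic", "public_avatar": "virtual"}
print(avatar_for_viewer(settings, "u42"))   # -> realistic
print(avatar_for_viewer(settings, "u99"))   # -> virtual
```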
  • The user can instruct server module 190, or set or apply settings instructing it, to generate visited-place-related activities as 3D or multi-dimensional animated graphics at the appropriate places, or instruct it to use only user-generated or user-added or user-provided place-specific contents, or instruct it to use both options 4371.
  • Figure 45 (A) illustrates an example 3D real world map, or 3D virtual world having a geography corresponding to the real world geography, or 3D Graphical User Interface (GUI), wherein a 3D virtual world geography 4505 or 3D real world map interface 4505 that corresponds to the real world geography is hosted at a server; as a result, as the user 4501 continuously moves about or navigates in a range of coordinates in the real world, based on monitoring and tracking the current location of the client device 200, the user 4501 also continuously moves about in a range of coordinates in the 3D real world map 4505 or 3D virtual world user interface 4505, and server module 190 records said movement or location co-ordinates and, based on the received data discussed above, generates a 3D visual realistic story (displaying the real world user's activities in a virtual world which is sufficiently similar to the real world).
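One simple way to mirror real-world movement onto a virtual-world map whose geography parallels the real world is to project each GPS fix into planar map coordinates. The Web-Mercator-style projection below is an illustrative assumption, not the disclosed implementation.

```python
import math
from typing import List, Tuple

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius used by Web Mercator

def to_virtual_coords(lat_deg: float, lon_deg: float) -> Tuple[float, float]:
    """Project a GPS fix into planar virtual-world coordinates (metres)."""
    x = math.radians(lon_deg) * EARTH_RADIUS_M
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * EARTH_RADIUS_M
    return x, y

def mirror_movement(gps_track: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Convert a stream of (lat, lon) fixes into avatar positions in the virtual world."""
    return [to_virtual_coords(lat, lon) for lat, lon in gps_track]
```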
  • Server module 190 records how the user reaches the second place from the first place: by walking or running, or by one or more types of means of transportation, including cab or taxi, rickshaw, car, bike, scooter or scooty, bus, train, boat, flight, cruise, horse cab, or cycle.
  • Based on the means of transportation and the realistic 3D animated avatar, server module 190 generates and displays a realistic animation or simulation (e.g. a 3D game having realistic characters moving in a virtual world) wherein the user's realistic 3D animated avatar moves or is transported by the particular type of means of transportation from the first place to the second place on the 3D real world map (server module 190 may employ, e.g., the Google Maps API for games, including 360-degree street view). For example, server module 190 starts recording the user's real world movement from [Sheridan Square Garden] 4323.
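A plausible way, assumed here rather than disclosed, to choose which means of transportation to animate between two visited places is to classify the average speed over the recorded route; the thresholds below are illustrative.

```python
def infer_transport_mode(distance_m: float, elapsed_s: float) -> str:
    """Guess a transport mode from the average speed between two places."""
    if elapsed_s <= 0:
        return "unknown"
    speed_kmh = (distance_m / elapsed_s) * 3.6
    if speed_kmh < 7:
        return "walking"
    if speed_kmh < 16:
        return "running"
    if speed_kmh < 30:
        return "cycling"
    if speed_kmh < 150:
        return "driving"   # car, cab, bus, train, etc.
    return "flying"
```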
  • Server module 190 identifies the starting point based on a user instruction or the marking of a particular location or place as the starting point, and on the user device's current location.
  • Server module 190 identifies each visited place based on the user staying or dwelling for a particular duration at a particular identified place, point of interest, or pre-defined geofence boundary (a place is not considered identified while the user is walking, running, or travelling by one or more types of means of transportation or vehicles, or at location co-ordinates having no identified place).
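A minimal dwell-detection sketch along these lines is shown below; the geofence radius, minimum stay, and data layout are illustrative assumptions rather than the patented method.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def detect_visited_places(track, radius_m=75, min_dwell_s=600):
    """track: time-ordered list of (timestamp_s, (lat, lon)) fixes.
    Returns clusters where the user stayed within radius_m for at least min_dwell_s,
    anchored at the first fix of each cluster."""
    visits, anchor_t, anchor_p, last_t = [], None, None, None
    for t, p in track:
        if anchor_p is None or haversine_m(anchor_p, p) > radius_m:
            # Leaving the previous cluster: keep it if the dwell was long enough.
            if anchor_p is not None and last_t - anchor_t >= min_dwell_s:
                visits.append({"center": anchor_p, "arrived": anchor_t, "left": last_t})
            anchor_t, anchor_p = t, p
        last_t = t
    if anchor_p is not None and last_t - anchor_t >= min_dwell_s:
        visits.append({"center": anchor_p, "arrived": anchor_t, "left": last_t})
    return visits
```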
  • Server module 190 records the transportation of the user from the starting point or location to the first identified place via a particular identified route, the movement outside the first identified place 4505, the entry into the first identified place 4506, and the conduct of one or more types of activities in/at/inside the first identified place 4506; server module 190 then records the transportation of the user from the first identified place to the second identified place via a particular identified route, the movement outside the second identified place, the entry into the second identified place, the conduct of one or more types of activities (including eating particular food, viewing a movie, visiting a shop, purchasing a particular product, following an itinerary, walking inside a mall, visiting an art gallery, visiting and viewing a museum, reading a book, shopping, wandering at a beach, or visiting a tourist place) in/at the second identified place, then the transportation of the user (via walking, running, cab, bus, train, flight, or cycle) from the second identified place to the third identified place via a particular identified route, and so on.
  • Server module 193 records the story in various pre-defined parts.
  • Server module 190 records from the first place to the second place, or from the starting point to the first place, then outside the first place in response to the client device being within a set distance of the first place, then records one or more types of activities in the first place, then records the transportation of the user from the first place to the second place, wherein the recording of each part is based on the received one or more types of data (discussed in figure 43). For example, after reaching [Blue Note] 4322 via route 4331, in response to the client device being within a set distance of [Blue Note] 4322, server module 190 records that user 4301 is walking and reaching [Blue Note] 4322 and displays this in a virtual world with a geography that parallels at least a portion of the geography of the real world, such that a user can navigate the virtual world 4505 by moving to different geographic locations in the real world; then, when user 4301 enters [Blue Note] 4322, an inside 360-degree 3D view is displayed and server module 190 records one or more types of user-conducted activities, actions, participations in events, and one or more types of transactions. For example, server module 190 records the ordering of a particular menu item (e.g. [Pizza]), eating the [Pizza], making payment for the [Pizza], submitting the receipt via scanning or taking a photo of the purchase receipt for the [Pizza], the taking of photographs or recording of videos (e.g. one or more selfies, with the food, with the restaurant, and the like) by user 4301, and the providing of one or more types of reactions by user 4301, including liking the [Pizza] and providing a review and ratings.
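The per-place recording described here behaves like a small state machine driven by the client's distance to the place. The sketch below shows one possible set of phases and radii; all names and values are assumptions for illustration.

```python
from enum import Enum, auto

class Phase(Enum):
    TRAVELLING = auto()
    APPROACHING = auto()
    INSIDE = auto()

def update_phase(distance_to_place_m: float,
                 approach_radius_m: float = 200.0,
                 entry_radius_m: float = 25.0) -> Phase:
    """Pick the recording phase for one place from the client's distance to it.
    The server would record a different story part per phase."""
    if distance_to_place_m <= entry_radius_m:
        return Phase.INSIDE        # inside / 360-degree view, record in-place activities
    if distance_to_place_m <= approach_radius_m:
        return Phase.APPROACHING   # record that the user is walking up to the place
    return Phase.TRAVELLING        # record transportation between places
```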
  • Figure 44 illustrates an example 2D real world map, or 2D virtual world having a geography corresponding to the real world geography, or 2D Graphical User Interface (GUI), displaying one or more types of map story, or story on map or story with map 4440, related to or shared by one or more contacts or users of the network 4307 with the user or viewing user 4407, enabling user 4407 to view, play, pause, stop, go to the start place 4489, go to the end place 4488, rewind (go back), forward (go to the next visited place, route, location, location co-ordinates, or pre-defined geofence), slow down or fast-forward 4474 the story on map or story with map 4440, view it as per user 4407's device current location, directly click on a particular place 4489, route 4431, pre-defined geofence 4444, or content item 4411 (e.g. photo or video or text) and view, play from, or jump to a particular place, location, route, or geofence, or start from a particular point or duration of the story; provide one or more types of reactions to one or more places, routes, or associated one or more types of contents, including photos, videos, posts, messages, voice, or commentary or comments, wherein the reactions comprise like 4476, dislike 4476, providing one or more types of emoticons or stickers or expressions or emoji 4475, providing comments 4477, providing ratings, and live or real-time chat 4477 with the sharing user or other viewers of said story on map or story with map 4440; share with one or more selected contacts and/or users of the network and/or one or more types of destinations, including one or more external websites and applications; view routes and directions 4450 and step-by-step guided directions 4448; and view place-associated or shared one or more types of contents, including one or more photos, videos, live video, structured contents (structured contents may be provided via fields and associated one or more types of values, data and contents, or forms or dynamically presented forms), voice, images, links or web addresses, text, animations, 3D contents, multimedia, emoticons, stickers, emoji, and place information.
  • The user can view combined stories of one or more selected users who shared stories related to similar places.
  • The generated or presented story 4440 is displayed, started, or automatically played, wherein indicia of a set of content items available for viewing are presented on a display; a first content item 4460 of the set of content items is presented on the display for a first view period of time 4461 / 4358 defined by a timer 4461, wherein the first content item 4460 is hidden when the first view period of time expires 4461 / 4358; a haptic contact signal indicative of a gesture applied to the display 4460 during the first view period of time 4461 is received from a touch controller; the content presentation controller hides the first content item 4460 in response to the haptic contact signal and proceeds to present on the display a second content item of the set of content items for a second view period of time defined by the timer; and the content item controller hides the second content item upon the expiration of the second view period of time.
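The timed, tap-advanced presentation described in this item can be sketched as a small controller; the class below is an assumed simplification, not the disclosed content presentation controller.

```python
import time

class StoryPlayer:
    """Each item stays on screen for its view period and is advanced early
    when a tap (haptic contact) is reported."""

    def __init__(self, items, view_period_s: float = 5.0):
        self.items = list(items)
        self.view_period_s = view_period_s
        self.index = 0
        self.shown_at = None

    def _show_current(self):
        self.shown_at = time.monotonic()
        return self.items[self.index]        # handed to the display layer

    def on_tick_or_tap(self, tapped: bool = False):
        """Call periodically and on touch events; returns the item to display,
        or None once every item has been shown."""
        if self.index >= len(self.items):
            return None                      # story finished
        if self.shown_at is None:
            return self._show_current()      # show the first item
        expired = (time.monotonic() - self.shown_at) >= self.view_period_s
        if tapped or expired:
            self.index += 1
            if self.index >= len(self.items):
                return None
            return self._show_current()      # hide current item, show the next one
        return self.items[self.index]        # keep showing the current item
```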
  • The user can turn ON or OFF 4402 a live or real-time updated view of the updated story on map or story with map 4440.
  • The user can view the story in 2D format 4440 or 3D format (discussed in detail in figure 45).
  • The user can view a combined view of similar types of stories (related to a particular place) from one or more users of the network 4495, or view suggested information related to the particular place 4495.
  • The user can view the full story or a selected part of it by selecting 4409 / 4414 / 4415 / 4416 / 4467 and including or excluding one or more visited places 4413 or sub-places 4466, pre-defined geofence boundaries 4423 /
  • FIG. 45 illustrates an example 3D real world map, or 3D virtual world having a geography corresponding to the real world geography, or 3D Graphical User Interface (GUI), wherein a 3D virtual world geography 4505 or 3D real world map interface 4505 that corresponds to the real world geography is hosted at a server.
  • In an embodiment, in the event of playing the story, server module 190 displays, in 3D or animation or simulation format, the user reaching, in the 3D virtual world, the second place from the first place by walking or running, or by using or employing one or more types of means of transportation, including cab or taxi, rickshaw, car, bike, scooter or scooty, bus, train, boat, flight, cruise, horse cab, or cycle.
  • Based on the means of transportation and the realistic 3D animated avatar, server module 190 generates and displays a realistic animation or simulation wherein the user's realistic 3D animated avatar moves or is transported by the particular type of means of transportation from the first place to the second place on the 3D real world map (e.g. server module 190 may employ the Google Maps API for games, including street view). For example, when user 4301 starts walking from [Sheridan Square Garden] 4323 to [Blue Note] 4322, server module 190 displays in the virtual world 4501 the user's real world movement from [Sheridan Square Garden] 4323 to [Blue Note] 4322 via route 4331 in 3D format, which the viewing user can view as per the speed settings.
  • Server module 190 identifies the starting point based on a user instruction or the marking of a particular location or place as the starting point, and on the user device's current location; server module 190 then identifies each visited place based on the user staying at a particular identified place, point of interest, or pre-defined geofence boundary for a particular duration (a place is not considered identified while the user is walking, running, or travelling by one or more types of means of transportation or vehicles, or at location co-ordinates having no identified place). Server module 190 displays in the virtual world said recorded transportation of the user from the starting point or location to the first identified place via a particular identified route, the movement outside the first identified place 4505, the entry into the first identified place 4506, and the conduct of one or more types of activities in/at the first identified place 4525, in the form of presenting one or more photos or videos shared by the user and/or other users of the network and/or related to said place, or in 3D or 3D-animated format based on the identified or determined types of activities conducted by the user.
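The choice between presenting user-shared media and a generated 3D animated scene for an in-place activity could be made with a simple dispatch rule such as the sketch below; the activity names and the preference for user media are assumptions for illustration only.

```python
def render_mode_for_activity(activity: str, has_user_media: bool) -> str:
    """Decide how an in-place activity is presented in the story: either the
    user's shared photos/videos or a generated 3D animated scene."""
    animatable = {"eating", "shopping", "walking", "watching_movie", "visiting_museum"}
    if has_user_media:
        return "photos_and_videos"
    return "3d_animation" if activity in animatable else "map_marker_only"
```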
  • Server module 190 then displays the transportation of the user from the first identified place to the second identified place via a particular identified route, the movement outside the second identified place, the entry into the second identified place, the conduct of one or more types of activities (including eating particular food, viewing a movie, visiting a shop, purchasing a particular product, following an itinerary, walking inside a mall, visiting an art gallery, visiting and viewing a museum, reading a book, shopping, wandering at a beach, or visiting a tourist place) in/at the second identified place, then the transportation of the user (via walking, running, cab, bus, train, flight, or cycle) from the second identified place to the third identified place via a particular identified route, and so on.
  • Server module 193 displays the recorded story in various pre-defined parts.
  • Based on the submitted scanned or photographed receipt for the [Pizza], the shared one or more photographs or recorded videos (e.g. one or more selfies, with the food, with the restaurant, and the like), the provided one or more types of reactions including liking the [Pizza] and providing a review and ratings, the monitored and tracked user device location, 360-degree imagery with 3D Street View technology, and the user's realistic 3D animated and movable avatar, server module 190 generates and displays in the virtual world 4601 a 3D animation or 3D simulation showing that user [Yogesh] 4605 and the particular identified accompanied user [Amita] 4610 both enter, walk in, order, are served by the waiter, eat the [Pizza], talk regarding the pizza, and make payment, with or without voice and/or editing and/or text and/or one or more types of associated overlay information, inside the particular place [Blue Note], in 3D or 3D animation format 401, which sufficiently looks and feels like viewing the real world or a video. In an embodiment, the viewing user can view a virtual tour inside said particular place. Likewise, based on the types of conducted activities, server module 190 generates and displays a 3D animation or 3D simulation showing that the user or one or more accompanied users are playing, walking, running, viewing, sitting, discussing, travelling, wandering in a mall, viewing products in a shop, purchasing one or more particular products, providing one or more types of expressions such as liking particular product(s), talking about particular product(s), reading, listening, asking, enquiring, questioning, reviewing details, swimming, providing various types of expressions including smiling, crying, joy, trust, anticipation, disgust, sadness, happiness, fear, anger, or surprise, travelling via one or more means of transportation (car, cab, taxi, bus, train, flight, boat, cruise, cycle, walking, running), drinking, and the like.
  • The filters or criteria comprise the income range of the user; the level of money spending, including minimum, medium, luxury, and super luxury; gender; users who travelled alone or with identified family members, friends, or contacts; stories started and ended at a particular place or location; home location; language; one or more types of profile fields; one or more types of activities conducted; food habits; visited place(s); age ranges; date & time ranges; one or more keywords, categories, types, criteria, or filters; and any combination thereof.
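AND-combined audience criteria of this kind can be evaluated with a straightforward predicate, as in the sketch below; the profile field names are illustrative assumptions, not the disclosure's schema.

```python
def matches_criteria(profile: dict, criteria: dict) -> bool:
    """AND-combine simple audience filters, e.g.
    {"gender": "female", "age_range": (18, 25), "school": "Particular Named School"}."""
    if "gender" in criteria and profile.get("gender") != criteria["gender"]:
        return False
    if "age_range" in criteria:
        low, high = criteria["age_range"]
        if not (low <= profile.get("age", -1) <= high):
            return False
    if "school" in criteria and criteria["school"] not in profile.get("schools", []):
        return False
    return True

def select_audience(users, criteria):
    """Return the users of the network whose profiles satisfy every filter."""
    return [u for u in users if matches_criteria(u, criteria)]
```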
  • Computer system 1000 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, or router, or in general any type of computing or electronic device.
  • computer system 1000 includes one or more processors 1010 coupled to a system memory 1020 via an input/output (I/O) interface 1030.
  • Computer system 1000 further includes a network interface 1040 coupled to I/O interface 1030, and one or more input/output devices 1050, such as cursor control device 1060, keyboard 1070, multitouch device 1090, and display(s) 1080.
  • embodiments may be implemented using a single instance of computer system 1000, while in other embodiments multiple such systems, or multiple nodes making up computer system 1000, may be configured to host different portions or instances of embodiments.
  • some elements may be implemented via one or more nodes of computer system 1000 that are distinct from those nodes implementing other elements.
  • computer system 1000 may be a uniprocessor system including one processor 1010, or a multiprocessor system including several processors 1010 (e.g., two, four, eight, or another suitable number).
  • processors 1010 may be any suitable processor capable of executing instructions.
  • processors 1010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA.
  • each of processors 1010 may commonly, but not necessarily, implement the same ISA.
  • At least one processor 1010 may be a graphics processing unit.
  • a graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device.
  • Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms.
  • a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU).
  • the methods as illustrated and described in the accompanying description may be implemented by program instructions configured for execution on one of, or parallel execution on two or more of, such GPUs.
  • GPU(s) may implement one or more application programmer interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s).
  • Suitable GPUs may be commercially available from vendors such as NVIDIA Corporation, ATI Technologies, and others.
  • System memory 1020 may be configured to store program instructions and/or data accessible by processor 1010.
  • system memory 1020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory.
  • program instructions and data implementing desired functions are shown stored within system memory 1020 as program instructions 1025 and data storage 1035, respectively.
  • program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 1020 or computer system 1000.
  • a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system 1000 via I/O interface 1030.
  • Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1040.
  • I/O interface 1030 may be configured to coordinate I/O traffic between processor 1010, system memory 1020, and any peripheral devices in the device, including network interface 1040 or other peripheral interfaces, such as input/output devices 1050.
  • I/O interface 1030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 1020) into a format suitable for use by another component (e.g., processor 1010).
  • I/O interface 1030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
  • I/O interface 1030 may be split into two or more separate components, such as a north bridge and a south bridge, for example.
  • some or all of the functionality of I/O interface 1030 such as an interface to system memory 1020, may be incorporated directly into processor 1010.
  • Network interface 1040 may be configured to allow data to be exchanged between computer system 1000 and other devices attached to a network, such as other computer systems, or between nodes of computer system 1000.
  • network interface 1040 may support communication via wired and/or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fiber Channel SANs, or via any other suitable type of network and/or protocol.
  • Input/output devices 1050 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer system 1000. Multiple input/output devices 1050 may be present in computer system 1000 or may be distributed on various nodes of computer system 1000. In some embodiments, similar input/output devices may be separate from computer system 1000 and may interact with one or more nodes of computer system 1000 through a wired and/or wireless connection, such as over network interface 1040.
  • memory 1020 may include program instructions 1025, configured to implement embodiments of methods as illustrated and described in the accompanying description, and data storage 1035, comprising various data accessible by program instructions 1025.
  • program instruction 1025 may include software elements of methods as illustrated and described in the accompanying description.
  • Data storage 1035 may include data that may be used in embodiments. In other embodiments, other or different software elements and/or data may be included.
  • computer system 1000 is merely illustrative and is not intended to limit the scope of methods as illustrated and described in the accompanying description.
  • the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including computers, network devices, internet appliances, PDAs, wireless phones, pagers, etc.
  • Computer system 1000 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system.
  • the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components.
  • the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available.
  • instructions stored on a computer-accessible medium separate from computer system 1000 may be transmitted to computer system 1000 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
  • Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations.
  • a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
  • a program is written as a series of human understandable computer instructions that can be read by a compiler and linker, and translated into machine code so that a computer can understand and run it.
  • a program is a list of instructions written in a programming language that is used to control the behavior of a machine, often a computer (in this case it is known as a computer program).
  • a programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.
  • the syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be a correctly structured document or fragment in that language. This applies both to programming languages, where the document represents source code, and markup languages, where the document represents data.
  • the syntax of a language defines its surface form. Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical or flowchart(s)). Documents that are syntactically invalid are said to have a syntax error. Syntax - the form - is contrasted with semantics - the meaning.
  • In processing computer languages, semantic processing generally comes after syntactic processing, but in some cases semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently.
  • the syntactic analysis comprises the frontend, while semantic analysis comprises the backend (and middle end, if this phase is distinguished).
  • system may be implemented via a combination of hardware and software, as described, or entirely in hardware elements.
  • the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.
  • Although the foregoing embodiments have been described in the context of a social network website, it will be apparent to one of ordinary skill in the art that the invention may be used with any social network service, even if it is not provided through a website.
  • Any system that provides social networking functionality can be used in accordance with the present invention even if it relies, for example, on e-mail, instant messaging or any other form of peer-to-peer communications, or any other technique for communicating between users.
  • Systems used to provide social networking functionality include a distributed computing system, client-side code modules or plug-ins, a client-server architecture, a peer-to-peer communication system, or other systems. The invention is thus not limited to any particular type of communication system, network, protocol, format, or application.
  • a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
  • Embodiments of the invention may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a tangible computer readable storage medium or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.
  • any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • Embodiments of the invention may also relate to a computer data signal embodied in a carrier wave, where the computer data signal includes any embodiment of a computer program product or other data combination described herein.
  • the computer data signal is a product that is presented in a tangible medium or carrier wave and modulated or otherwise encoded in the carrier wave, which is tangible, and transmitted according to any suitable transmission method.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention concerns systems and methods for publishing virtual objects. Systems and methods are provided for accessing virtual object data associated with a real world object, location or geofence information, and object criteria; generating a virtual object using the virtual object data; storing data specifying an association between the virtual object, the associated real world object, location or geofence information, and the object criteria; detecting that a client device of the server has performed a scan or an augmented reality (AR) scan, or has taken a photograph or provided a raw photograph or scanned data from a particular real world object location; identifying or recognizing an object in the photograph or scanned data; and, based on the identified object satisfying the object criteria associated with the virtual object in the stored data, displaying or providing the virtual object and associated data, including virtual money, to the client device. In another embodiment, a game is provided having a virtual world geography that corresponds to the real world geography, so that, as the player continuously moves about or navigates within a range of coordinates in the real world, the player also continuously moves about within a range of coordinates in the real world map or virtual world; and, responsive to the client device being within a predefined geofence boundary or a set distance of the location of the business in the real world, the game server receives an augmented reality scan or scanned data or a raw or captured photograph, identifies or recognizes an object in the photograph or scanned data, and, based on the identified object satisfying the object criteria associated with the virtual object in the stored data, displays or provides the virtual object and associated data, including virtual money, to the client device. In another embodiment, a game is hosted at a game server, the game having a virtual world geography that corresponds to the real world geography, so that, as the player continuously moves about or navigates within a range of coordinates in the real world based on monitoring and tracking of the current location of the client device, the player also continuously moves about within a range of coordinates in the real world map or virtual world; a predefined geofence in the real world and associated virtual objects are accessed; and, responsive to the client device being within a predefined boundary of the geofence in the real world, entering the geofence, or staying or dwelling for a predefined or particular duration within the geofence, the game server displays or provides one or more types of one or more virtual objects and associated data, including virtual money, to the client device or in the real world map or virtual world.
In an embodiment, real world activities are displayed in a 2D or 3D virtual world or a real world map interface having corresponding real world geography.
PCT/IB2018/056071 2018-07-27 2018-08-11 Balayage à réalité augmentée d'un objet du monde réel ou entrée dans un périmètre virtuel pour afficher des objets virtuels et affichage des activités du monde réel dans le monde virtuel ayant une géographie correspondante du monde réel Ceased WO2020021319A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/105,025 US20180350144A1 (en) 2018-07-27 2018-08-20 Generating, recording, simulating, displaying and sharing user related real world activities, actions, events, participations, transactions, status, experience, expressions, scenes, sharing, interactions with entities and associated plurality types of data in virtual world
US16/104,973 US11103773B2 (en) 2018-07-27 2018-08-20 Displaying virtual objects based on recognition of real world object and identification of real world object associated location or geofence
US16/104,980 US20180345129A1 (en) 2018-07-27 2018-08-20 Display virtual objects within predefined geofence or receiving of unique code from closest beacon

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IBPCT/IB2018/055631 2018-07-27
IBPCT/IB2018/055631 2018-07-27
IBPCT/IB2018/055821 2018-08-02
IBPCT/IB2018/055821 2018-08-02

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
IBPCT/IB2018/055821 Continuation 2018-07-27 2018-08-02

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/104,973 Continuation US11103773B2 (en) 2018-07-27 2018-08-20 Displaying virtual objects based on recognition of real world object and identification of real world object associated location or geofence

Publications (1)

Publication Number Publication Date
WO2020021319A1 true WO2020021319A1 (fr) 2020-01-30

Family

ID=69181988

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/056071 Ceased WO2020021319A1 (fr) 2018-07-27 2018-08-11 Balayage à réalité augmentée d'un objet du monde réel ou entrée dans un périmètre virtuel pour afficher des objets virtuels et affichage des activités du monde réel dans le monde virtuel ayant une géographie correspondante du monde réel

Country Status (1)

Country Link
WO (1) WO2020021319A1 (fr)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597445A (zh) * 2020-05-14 2020-08-28 北京百度网讯科技有限公司 信息推荐方法及装置
CN111773658A (zh) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 一种基于计算机视觉库的游戏交互方法及装置
CN112215965A (zh) * 2020-09-30 2021-01-12 杭州灵伴科技有限公司 基于ar的场景导览方法、设备以及计算机可读存储介质
CN112330819A (zh) * 2020-11-04 2021-02-05 腾讯科技(深圳)有限公司 基于虚拟物品的交互方法、装置及存储介质
CN112565806A (zh) * 2020-12-02 2021-03-26 广州繁星互娱信息科技有限公司 虚拟礼物赠送方法、装置、计算机设备及介质
CN113313839A (zh) * 2021-05-27 2021-08-27 百度在线网络技术(北京)有限公司 信息显示方法、装置、设备、存储介质及程序产品
CN113411248A (zh) * 2021-05-07 2021-09-17 上海纽盾科技股份有限公司 等保测评中结合ar的数据可视化处理方法及系统
KR102313272B1 (ko) * 2021-05-25 2021-10-14 최인환 증강 현실 기반 실시간 음성 번역 서비스 제공 방법, 장치 및 시스템
CN113694520A (zh) * 2021-08-27 2021-11-26 上海米哈游璃月科技有限公司 道具效果的处理方法、装置、电子设备以及存储介质
CN113806644A (zh) * 2021-09-18 2021-12-17 英华达(上海)科技有限公司 消息处理、显示方法、装置、终端及存储介质
US11314943B2 (en) * 2018-10-05 2022-04-26 Capital One Services, Llc Typifying emotional indicators for digital messaging
US11361039B2 (en) * 2018-08-13 2022-06-14 International Business Machines Corporation Autodidactic phenological data collection and verification
CN114842545A (zh) * 2022-07-06 2022-08-02 南京熊猫电子股份有限公司 一种基于轮盘赌的车站降级人脸识别库配发方法
CN114866506A (zh) * 2022-04-08 2022-08-05 北京百度网讯科技有限公司 展示虚拟形象的方法、装置及电子设备
CN115080876A (zh) * 2022-07-04 2022-09-20 合众新能源汽车有限公司 一种服务推荐方法、终端及计算机存储介质
WO2023055297A3 (fr) * 2021-09-30 2023-05-11 Lemon Inc. Réseautage social basé sur des éléments d'actifs
WO2023055296A3 (fr) * 2021-09-30 2023-05-11 Lemon Inc. Réseautage social basé sur des éléments d'actifs
US20230388781A1 (en) * 2021-02-09 2023-11-30 Vivo Mobile Communication Co., Ltd. Privacy setting method and apparatus for uwb detection, and electronic device
US11881049B1 (en) 2022-06-30 2024-01-23 Mark Soltz Notification systems and methods for notifying users based on face match
JP2024017310A (ja) * 2022-07-27 2024-02-08 株式会社ジェーシービー プログラム、情報処理装置、および情報処理方法
CN117590766A (zh) * 2024-01-19 2024-02-23 青岛理工大学 通道入口导流栏杆角度调整的控制方法和装置
WO2024120083A1 (fr) * 2022-12-07 2024-06-13 Lien I Chi Steven Procédé de fourniture d'interaction et d'analyse de données d'une plateforme numérique pour un objet
CN118214676A (zh) * 2024-03-20 2024-06-18 北京航天万源科技有限公司 基于平行网络大模型数字化专家的网络故障管控方法
US12482194B2 (en) 2022-12-13 2025-11-25 International Business Machines Corporation Augmented reality visualization of an action on an identified object

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9669296B1 (en) * 2012-07-31 2017-06-06 Niantic, Inc. Linking real world activities with a parallel reality game
US9754355B2 (en) * 2015-01-09 2017-09-05 Snap Inc. Object recognition based photo filters
US10019774B2 (en) * 2012-08-20 2018-07-10 Tautachrome, Inc. Authentication and validation of smartphone imagery

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9669296B1 (en) * 2012-07-31 2017-06-06 Niantic, Inc. Linking real world activities with a parallel reality game
US10019774B2 (en) * 2012-08-20 2018-07-10 Tautachrome, Inc. Authentication and validation of smartphone imagery
US9754355B2 (en) * 2015-01-09 2017-09-05 Snap Inc. Object recognition based photo filters

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11361039B2 (en) * 2018-08-13 2022-06-14 International Business Machines Corporation Autodidactic phenological data collection and verification
US11314943B2 (en) * 2018-10-05 2022-04-26 Capital One Services, Llc Typifying emotional indicators for digital messaging
CN111597445A (zh) * 2020-05-14 2020-08-28 北京百度网讯科技有限公司 信息推荐方法及装置
CN111773658A (zh) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 一种基于计算机视觉库的游戏交互方法及装置
CN111773658B (zh) * 2020-07-03 2024-02-23 珠海金山数字网络科技有限公司 一种基于计算机视觉库的游戏交互方法及装置
CN112215965A (zh) * 2020-09-30 2021-01-12 杭州灵伴科技有限公司 基于ar的场景导览方法、设备以及计算机可读存储介质
CN112215965B (zh) * 2020-09-30 2024-02-20 杭州灵伴科技有限公司 基于ar的场景导览方法、设备以及计算机可读存储介质
CN112330819A (zh) * 2020-11-04 2021-02-05 腾讯科技(深圳)有限公司 基于虚拟物品的交互方法、装置及存储介质
CN112330819B (zh) * 2020-11-04 2024-02-06 腾讯科技(深圳)有限公司 基于虚拟物品的交互方法、装置及存储介质
CN112565806A (zh) * 2020-12-02 2021-03-26 广州繁星互娱信息科技有限公司 虚拟礼物赠送方法、装置、计算机设备及介质
CN112565806B (zh) * 2020-12-02 2023-08-29 广州繁星互娱信息科技有限公司 虚拟礼物赠送方法、装置、计算机设备及介质
US20230388781A1 (en) * 2021-02-09 2023-11-30 Vivo Mobile Communication Co., Ltd. Privacy setting method and apparatus for uwb detection, and electronic device
CN113411248B (zh) * 2021-05-07 2024-03-05 上海纽盾科技股份有限公司 等保测评中结合ar的数据可视化处理方法及系统
CN113411248A (zh) * 2021-05-07 2021-09-17 上海纽盾科技股份有限公司 等保测评中结合ar的数据可视化处理方法及系统
KR102313272B1 (ko) * 2021-05-25 2021-10-14 최인환 증강 현실 기반 실시간 음성 번역 서비스 제공 방법, 장치 및 시스템
CN113313839A (zh) * 2021-05-27 2021-08-27 百度在线网络技术(北京)有限公司 信息显示方法、装置、设备、存储介质及程序产品
CN113694520A (zh) * 2021-08-27 2021-11-26 上海米哈游璃月科技有限公司 道具效果的处理方法、装置、电子设备以及存储介质
CN113806644A (zh) * 2021-09-18 2021-12-17 英华达(上海)科技有限公司 消息处理、显示方法、装置、终端及存储介质
US11763496B2 (en) 2021-09-30 2023-09-19 Lemon Inc. Social networking based on asset items
US12494056B2 (en) 2021-09-30 2025-12-09 Lemon Inc. Social networking based on asset items
WO2023055297A3 (fr) * 2021-09-30 2023-05-11 Lemon Inc. Réseautage social basé sur des éléments d'actifs
EP4397058A4 (fr) * 2021-09-30 2024-12-04 Lemon Inc. Réseautage social basé sur des éléments d'actifs
US12045912B2 (en) 2021-09-30 2024-07-23 Lemon Inc. Social networking based on collecting asset items
WO2023055296A3 (fr) * 2021-09-30 2023-05-11 Lemon Inc. Réseautage social basé sur des éléments d'actifs
CN114866506A (zh) * 2022-04-08 2022-08-05 北京百度网讯科技有限公司 展示虚拟形象的方法、装置及电子设备
US11972633B2 (en) 2022-06-30 2024-04-30 Mark Soltz Notification systems and methods for notifying users based on face match
US12154375B2 (en) 2022-06-30 2024-11-26 Mark Soltz Notification systems and methods for notifying users based on face match
US11881049B1 (en) 2022-06-30 2024-01-23 Mark Soltz Notification systems and methods for notifying users based on face match
CN115080876A (zh) * 2022-07-04 2022-09-20 合众新能源汽车有限公司 一种服务推荐方法、终端及计算机存储介质
CN114842545A (zh) * 2022-07-06 2022-08-02 南京熊猫电子股份有限公司 一种基于轮盘赌的车站降级人脸识别库配发方法
JP2024017310A (ja) * 2022-07-27 2024-02-08 株式会社ジェーシービー プログラム、情報処理装置、および情報処理方法
JP7731108B2 (ja) 2022-07-27 2025-08-29 株式会社ジェーシービー プログラム、情報処理装置、および情報処理方法
WO2024120083A1 (fr) * 2022-12-07 2024-06-13 Lien I Chi Steven Procédé de fourniture d'interaction et d'analyse de données d'une plateforme numérique pour un objet
US12482194B2 (en) 2022-12-13 2025-11-25 International Business Machines Corporation Augmented reality visualization of an action on an identified object
CN117590766A (zh) * 2024-01-19 2024-02-23 青岛理工大学 通道入口导流栏杆角度调整的控制方法和装置
CN117590766B (zh) * 2024-01-19 2024-05-28 青岛理工大学 通道入口导流栏杆角度调整的控制方法和装置
CN118214676A (zh) * 2024-03-20 2024-06-18 北京航天万源科技有限公司 基于平行网络大模型数字化专家的网络故障管控方法

Similar Documents

Publication Publication Date Title
US11103773B2 (en) Displaying virtual objects based on recognition of real world object and identification of real world object associated location or geofence
WO2020021319A1 (fr) Balayage à réalité augmentée d'un objet du monde réel ou entrée dans un périmètre virtuel pour afficher des objets virtuels et affichage des activités du monde réel dans le monde virtuel ayant une géographie correspondante du monde réel
US20220150232A1 (en) Systems and methods for verifying attributes of users of online systems
Gerritsen et al. Events as a strategic marketing tool
US20200294097A1 (en) Seamless Integration of Augmented, Alternate, Virtual, and/or Mixed Realities with Physical Realities for Enhancement of Web, Mobile and/or Other Digital Experiences
Yarrow Decoding the new consumer mind: how and why we shop and buy
Ryan Understanding social media: how to create a plan for your business that works
Schroeder Startup rising: The entrepreneurial revolution remaking the Middle East
US12192426B2 (en) Device and system for recording and reading augmented reality content
Tanner What Stays in Vegas: The World of Personal Data - Lifeblood of Big Business - and the End of Privacy as We Know It
Ferreira Location based transmedia storytelling: Enhancing the tourism experience
Margolis et al. Guerrilla marketing for dummies
Hemachandran et al. The business of the metaverse: How to maintain the human element within this new business reality
Legorburu et al. Storyscaping: Stop creating ads, start creating worlds
Mourdoukoutas et al. The seven principles of WOM and buzz marketing: Crossing the tipping point
Scott et al. Fanocracy: Turning fans into customers and customers into fans
George Digital marketing in tourism and hospitality
Clapperton This is social commerce: Turning social media into sales
Falco Customer engagement through gamification marketing in China: a focus on the luxury market.
Salt Social location marketing: outshining your competitors on Foursquare, Gowalla, Yelp & other location sharing sites
Shin et al. The Korean wave before and after a more digital post-pandemic era
Bailyn Outsmarting Social Media: Profiting in the Age of Friendship Marketing
Brogan et al. The impact equation: Are you making things happen or just making noise?
Baecker Ethical Tech Startup Guide
US20240330971A1 (en) Methods, systems, apparatuses, and devices for provisioning experiences for enhancing engagements

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18927544

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18927544

Country of ref document: EP

Kind code of ref document: A1